Einsum capabilities #120
Comments
In your example, index J is contracted.
Explicit Einstein summation, as NumPy provides it, is not currently supported in Fastor. There is already a discussion on how to support it (see #91). One way around this for now is to avoid contracting the index that you want to retain, that is:

auto res = Fastor::einsum<Fastor::Index<I,M>, Fastor::Index<N,K>, Fastor::OIndex<I,M,N,K>>(t1, t2);

Now your problem is narrowed down to summing over the extra M and N indices of res.
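In NumPy terms, the suggested workaround corresponds to renaming the indices so that none repeats, which makes einsum contract nothing. A minimal sketch, assuming t1 and t2 are 2-D with hypothetical shapes (3, 2) and (2, 2):

```python
import numpy as np

t1 = np.random.rand(3, 2)   # indices I, M (hypothetical shapes)
t2 = np.random.rand(2, 2)   # indices N, K

# All four indices are distinct, so einsum contracts nothing; this is
# the NumPy analogue of einsum<Index<I,M>, Index<N,K>, OIndex<I,M,N,K>>.
res = np.einsum('im,nk->imnk', t1, t2)
print(res.shape)   # (3, 2, 2, 2)
```

Reducing this rank-4 outer product down to the desired rank-3 result is the remaining step the thread discusses.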
Hi, thanks for the response. I am not sure what the summation process over M and N would look like in order to produce res. In Python this could be done with two einsum calls:
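The Python snippet was not captured in this transcript; a plausible two-call sketch (with hypothetical shapes (3, 2) and (2, 2)) is:

```python
import numpy as np

t1 = np.random.rand(3, 2)
t2 = np.random.rand(2, 2)

# Call 1: full outer product, no contraction (all indices distinct).
tmp = np.einsum('im,nk->imnk', t1, t2)   # shape (3, 2, 2, 2)

# Call 2: take the M == N diagonal; m stays in the output, so nothing is summed.
res = np.einsum('immk->imk', tmp)        # shape (3, 2, 2)

# This matches the single explicit-einsum call.
assert np.allclose(res, np.einsum('ij,jk->ijk', t1, t2))
```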
Hence, in this case, instead of summing over the contracted indices, you would place the output of the product 'ij,jk->ijk' in a tensor like so:

Fastor::Tensor<double, 3,2,2> res;

Is there a better way to achieve this? Thanks.
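The fill the commenter describes can be sketched in NumPy/loop form (rather than the Fastor original, whose code was not captured here) as placing each product term directly into the output tensor:

```python
import numpy as np

t1 = np.random.rand(3, 2)
t2 = np.random.rand(2, 2)

# Fill each entry with t1[i, j] * t2[j, k], mirroring a loop fill of
# Fastor::Tensor<double, 3, 2, 2> res;
res = np.empty((3, 2, 2))
for i in range(3):
    for j in range(2):
        for k in range(2):
            res[i, j, k] = t1[i, j] * t2[j, k]

assert np.allclose(res, np.einsum('ij,jk->ijk', t1, t2))
```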
Hi,
I am currently using Fastor to perform tensor contractions and I am unsure whether it supports my example use case.
For example, given this Python code:
The result would produce a tensor of shape [3,2,2], which looks like:
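The Python code was not captured in this transcript; assuming t1 has shape (3, 2) and t2 has shape (2, 2), it presumably resembled:

```python
import numpy as np

# Hypothetical inputs; shapes chosen so the result is (3, 2, 2) as described.
t1 = np.arange(6, dtype=float).reshape(3, 2)   # shape (3, 2), indices i, j
t2 = np.arange(4, dtype=float).reshape(2, 2)   # shape (2, 2), indices j, k

# Explicit einsum: j is repeated but kept in the output, so it is NOT summed.
res = np.einsum('ij,jk->ijk', t1, t2)          # res[i, j, k] = t1[i, j] * t2[j, k]
print(res.shape)                               # (3, 2, 2)
```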
Trying to replicate this behaviour in Fastor, I wrote the following code:
And this gives an error:
Is this supported?
Thanks.