fellideal changed the title from "ENH: Symmetries and parallization support of einsum" to "ENH: Symmetries and parallelization support of einsum" on Jul 8, 2023
@seberg I was looking at older related threads and came across #14332. Since NumPy doesn't natively support parallelization apart from its integration with BLAS, I was thinking along the lines of the solution proposed in #14332, which calls for integrating BLAS for contractions that rely on the c_einsum backend call. Do you have any suggestions? Thank you!
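For context, part of this BLAS route already exists today: with `optimize` enabled, `einsum` first computes a contraction path and then executes the pairwise contractions via BLAS-backed `tensordot`/`dot` calls where the contraction maps onto a matrix product, instead of the single-threaded c_einsum kernel. A minimal sketch (the array shapes here are arbitrary illustration):

```python
import numpy as np

# With an optimized contraction path, einsum dispatches GEMM-shaped
# contractions to BLAS rather than the naive c_einsum loop.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40, 40))
B = rng.standard_normal((40, 40, 40))

# Inspect the chosen path (info contains the estimated FLOP count).
path, info = np.einsum_path('ijk,jkl->il', A, B, optimize='optimal')

C_opt = np.einsum('ijk,jkl->il', A, B, optimize=path)
C_ref = np.einsum('ijk,jkl->il', A, B, optimize=False)

# Both routes agree; only the execution backend differs.
assert np.allclose(C_opt, C_ref)
```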
Proposed new feature or change:

`einsum` has been fantastic for computing tensor contractions. However, there are two additional features I believe would greatly enhance its functionality and efficiency.

(1) Symmetries:

Consider a tensor contraction such as `A[a,b,c,d,e] * B[d,e,f,g,h] = C[a,b,c,f,g,h]`. If we have symmetries such as `A[a,b,c,d,e] = A[b,a,c,d,e]` or `A[a,b,c,d,e] = A[a,b,c,e,d]`, being able to exploit them could significantly reduce the computational cost.

A package called `ctf` provides some support for this, but it seems limited to symmetric/antisymmetric pairs of adjacent indices. Its GitHub repository: https://github.com/cyclops-community/ctf

Further details can be found in its documentation: https://solomon2.web.engr.illinois.edu/ctf_python/ctf.html#module-ctf.core

However, from what I've seen, its efficiency isn't as good as one might hope (as noted in cyclops-community/ctf#136).
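To make the potential saving concrete, here is a minimal sketch (not a proposed API) of the symmetric case above: when `A` is symmetric in its first two indices, which are free indices of the output, `C` inherits that symmetry, so roughly half of the output blocks can be mirrored instead of recomputed.

```python
import numpy as np

# Sketch of the saving: A[a,b,c,d,e] == A[b,a,c,d,e], and (a, b) are free
# indices, so the result inherits the symmetry C[a,b,...] == C[b,a,...].
# Only the a <= b blocks need computing; the rest follow by mirroring.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n, n, n, n))
A = A + A.transpose(1, 0, 2, 3, 4)   # enforce A[a,b,...] == A[b,a,...]
B = rng.standard_normal((n, n, n, n, n))

C = np.empty((n, n, n, n, n, n))
for a in range(n):
    for b in range(a, n):            # roughly half the (a, b) pairs
        C[a, b] = np.einsum('cde,defgh->cfgh', A[a, b], B)
        C[b, a] = C[a, b]            # mirrored, not recomputed

# Matches the full, symmetry-unaware contraction:
assert np.allclose(C, np.einsum('abcde,defgh->abcfgh', A, B))
```

A built-in version could exploit this without materializing the loop in Python, which is where the real speedup would come from.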
(2) Parallelization:

Although `tensordot` appears to support parallelization (see https://stackoverflow.com/questions/23650449/numpy-np-einsum-array-multiplication-using-multiple-cores), when enabling MKL/OpenBLAS/OpenMP together with `einsum` for general contractions, the expected speedup is not realized, despite an increase in CPU load.

For binary tensor contractions there are packages such as https://github.com/jackkamm/einsum2. However, for more general cases (contractions of more than two tensors), comprehensive solutions still seem to be lacking.
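As a workaround in the binary case, the example contraction above can be phrased with `np.tensordot`, which lowers to a single matrix product and therefore runs on whatever multithreaded BLAS NumPy is linked against (a sketch for this one contraction, not a general solution):

```python
import numpy as np

# A[a,b,c,d,e] * B[d,e,f,g,h] -> C[a,b,c,f,g,h] via tensordot:
# the summed axes (d, e) of A, i.e. axes (3, 4), are matched with
# axes (0, 1) of B, and the whole contraction becomes one BLAS matmul.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n,) * 5)
B = rng.standard_normal((n,) * 5)

C_tdot = np.tensordot(A, B, axes=([3, 4], [0, 1]))
C_ref = np.einsum('abcde,defgh->abcfgh', A, B)

assert np.allclose(C_tdot, C_ref)
```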
If these two features, symmetries and parallelization, could be incorporated into `einsum`, I believe it would greatly enhance its functionality and computational efficiency.