tensor_contract() interface prevents turning off contraction path cache #112

Open · mattorourke17 opened this issue Feb 21, 2022 · 2 comments

@mattorourke17 (Contributor)

When calling `TensorNetwork.contract()` or any related function that filters down to `tensor_contract()`, I don't think it is currently possible to turn off caching of contraction expressions. If the user supplies `cache=False` as a kwarg, it eventually reaches `tensor_contract(..., **contract_opts)` and is then forwarded to `get_contraction(eq, *shapes, cache=True, get='expr', optimize=None, **kwargs)` inside the kwargs, whereas `cache` is already set explicitly in that call. The body of `get_contraction()` does not catch the case where a `cache` value arrives via `**kwargs`, and ends up always passing `True` as the `cache` value when calling `_get_contraction()`.

It is important to be able to turn off caching from the high-level interfaces, because a user might sometimes want to contract two networks that share the same opt_einsum expression but have different bond dimensions, in which case opt_einsum throws an error like `ValueError: Size of label 'g' for operand 6 (2) does not match previous terms (3)`.

I think this could be fixed either by checking `if 'cache' in kwargs` in `get_contraction()`, or by checking `if 'cache' in contract_opts` in `tensor_contract()`.
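
A minimal sketch of the second option, using stand-in functions rather than quimb's actual source (the equation and shapes below are placeholders):

```python
def get_contraction(eq, *shapes, cache=True, optimize='greedy'):
    # stand-in: just report which cache setting actually reaches this level
    print(f"get_contraction called with cache={cache}")

def tensor_contract(*arrays, **contract_opts):
    eq, shapes = "ab,bc->ac", ((2, 3), (3, 4))  # placeholder values
    # pop the user's cache flag out of contract_opts so it cannot be
    # silently shadowed by a hard-coded cache=True further down
    cache = contract_opts.pop('cache', True)
    get_contraction(eq, *shapes, cache=cache, **contract_opts)

tensor_contract(cache=False)  # -> get_contraction called with cache=False
```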

@mattorourke17 (Contributor, Author)

Looking closer at opt_einsum, I actually don't think the error I quoted is related to the cache kwarg propagation issue.

@jcmgray (Owner) commented Feb 21, 2022

quimb should cache on both the contraction equation and the sizes - and the returned expression should be able to handle any dimensions changing - so it's not totally clear what the source of the error is, but it implies that two different tensors in the same contraction have index dimensions that don't match.
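
As an illustration of that caching scheme, here is a simplified sketch (not the actual quimb implementation) where the cache key includes both the equation and the operand shapes, so changing bond dimensions creates a new cache entry rather than reusing a stale expression:

```python
import functools

import opt_einsum as oe

@functools.lru_cache(maxsize=4096)
def cached_expression(eq, shapes, optimize='greedy'):
    # shapes must be hashable (e.g. a tuple of tuples) to act as a cache key
    return oe.contract_expression(eq, *shapes, optimize=optimize)

expr_small = cached_expression("ab,bc->ac", ((2, 3), (3, 4)))
expr_large = cached_expression("ab,bc->ac", ((2, 7), (7, 4)))  # distinct entry
```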

Having said that, I have some refactoring of the contraction parsing code that I need to push, including optionally using cotengra rather than opt_einsum to perform the contraction, which has various advantages and might be easier to understand.
