
Circuit sampling with slicing depends on deleted Cotengra class #186

Open
juliendrapeau opened this issue Jun 12, 2023 · 4 comments

@juliendrapeau
Contributor

juliendrapeau commented Jun 12, 2023

What happened?

Hi,

I am trying to sample from large circuits with the function quimb.tensor.Circuit.sample(). Since the circuit is large, I need to use slicing via the argument target_size in order to use less memory. However, this process requires the class cotengra.SlicedContractor(), which was removed from Cotengra on April 24.

Is there a simple way to fix this issue?

Thank you,
Julien Drapeau

Environment

quimb-1.5.1.dev11+gf0c5ea8

@jcmgray
Owner

jcmgray commented Jun 13, 2023

Hi @juliendrapeau,

Apologies! I had forgotten that the Circuit class relies on that. The long-term answer is that slicing will be encapsulated in the optimize argument, so that if you supply a

opt = cotengra.ReusableHyperOptimizer(
    slicing_opts=dict(...),
)

then quimb will use it both to find the contraction path and to recognize that it can hand the contraction itself to cotengra, which uses the newer implementation of sliced contraction.

I think this is already set up, but I need to double-check.

@juliendrapeau
Contributor Author

Hi @jcmgray,

Thank you for your response. This seems to be set up and working as far as I can tell.

Related to this issue: the description of the optimize argument for the sample function says "Contraction path optimizer to use for the marginals, shouldn't be a reusable path optimizer as called on many different TNs". To be sure, does that imply that we need to include overwrite=True in the ReusableHyperOptimizer?

Thank you,
Julien Drapeau

@jcmgray
Owner

jcmgray commented Jun 15, 2023

I would need to double-check, but I think that is just a typo: in fact it should be a reusable optimizer. The overwrite arg is really for advanced usage, where you want to ignore any matching tree that is already in the cache.

@juliendrapeau
Contributor Author

All right!

Thank you again,
Julien Drapeau
