
Support constrained Hilbert spaces in ARNN #1659

Draft · wants to merge 14 commits into master

Conversation

@wdphy16 (Collaborator) commented Nov 28, 2023

When defining an ARNN wave function on a constrained Hilbert space, we can first define an unconstrained wave function, then set the amplitude of the states outside the constrained space to zero, and finally reweight the remaining states so the wave function is normalized.

There are two ways to do the reweighting. The first is to multiply all the remaining states by a single global constant, which I call 'global reweighting' here. The second is to reweight the conditional probabilities at each AR sampling step, as described in Hibat-Allah et al. (2020), Appendix D.2, which I call 'conditional reweighting'.

For example, if the unconstrained wave function on two spin-1/2 sites has

p(↑↑) = p1, p(↑↓) = p2, p(↓↑) = p3, p(↓↓) = p4

then under the constraint total_sz = 0, global reweighting gives

p(↑↓) = p2 / (p2 + p3), p(↓↑) = p3 / (p2 + p3)

while conditional reweighting gives

p(↑↓) = p1 + p2, p(↓↑) = p3 + p4
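
For concreteness, here is a minimal NumPy sketch (not the NetKet API; the probability values are made up) that reproduces both reweightings for this two-site example:

```python
import numpy as np

# Unconstrained joint probabilities on two spin-1/2 sites,
# ordered as (uu, ud, du, dd) = (p1, p2, p3, p4).
p = np.array([0.1, 0.2, 0.3, 0.4])
mask = np.array([False, True, True, False])  # total_sz = 0 keeps ud and du

# Global reweighting: zero out invalid states, divide by one global constant.
p_global = np.where(mask, p, 0.0)
p_global = p_global / p_global.sum()
# -> p(ud) = p2 / (p2 + p3) = 0.4, p(du) = p3 / (p2 + p3) = 0.6

# Conditional reweighting: renormalize each conditional during AR sampling.
# Step 1 is unconstrained: p(u) = p1 + p2, p(d) = p3 + p4.
# Step 2 is fully constrained: the second spin is forced, so its
# reweighted conditional puts all weight on the single valid value.
p_cond = np.array([0.0, p[0] + p[1], p[2] + p[3], 0.0])
# -> p(ud) = p1 + p2 = 0.3, p(du) = p3 + p4 = 0.7
```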

Global reweighting works not only with ARNN but with any ansatz, and it can be used with Markov chain samplers, which only need unnormalized probabilities, but not with the direct sampler, because the conditionals of the globally reweighted distribution are no longer the ones the network outputs. Conditional reweighting, on the other hand, is specific to ARNN, and it works with both Markov chain samplers and the direct sampler.
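
To see why global reweighting composes with Metropolis sampling, note that only the acceptance ratio is ever needed. Below is an illustrative sketch (not code from this PR; `log_psi`, `is_valid`, and `propose` are hypothetical callables) in which the global constant cancels:

```python
import numpy as np

def metropolis_step(s, log_psi, is_valid, propose, rng):
    """One Metropolis step targeting the globally reweighted distribution.

    The global reweighting constant cancels in the acceptance ratio, so
    only the unconstrained log-amplitudes and a validity check are needed.
    """
    s_new = propose(s, rng)
    if not is_valid(s_new):
        # The reweighted probability outside the constrained space is zero.
        return s
    # |psi(s')|^2 / |psi(s)|^2 = exp(2 * Re[log psi(s') - log psi(s)])
    log_ratio = 2.0 * (log_psi(s_new) - log_psi(s)).real
    if np.log(rng.uniform()) < log_ratio:
        return s_new
    return s
```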

This PR implements conditional reweighting in the ARNN model. We plan to implement global reweighting as a 'post-processing' wrapper around any sampler, and it would be interesting to compare the two reweighting methods' expressive power and training convergence speed.
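
As a rough illustration of what conditional reweighting does during sampling, here is a sketch for the total_sz = 0 constraint (not the actual code in this PR; `conditionals` is a hypothetical stand-in for the ARNN's unconstrained conditionals):

```python
import numpy as np

def sample_total_sz0(conditionals, n_sites, rng):
    """Ancestral sampling with conditional reweighting for total_sz = 0.

    `conditionals(sigma, i)` returns the unconstrained conditional
    p(s_i | s_1 ... s_{i-1}) as (p_down, p_up); the point here is only
    the masking and renormalization step.
    """
    sigma = np.zeros(n_sites)
    n_up = 0
    for i in range(n_sites):
        p_down, p_up = conditionals(sigma, i)
        # Mask local values that cannot be completed to a valid state:
        # a total_sz = 0 configuration has exactly n_sites/2 up spins.
        if n_up >= n_sites // 2:
            p_up = 0.0
        if i - n_up >= n_sites // 2:
            p_down = 0.0
        # Reweight so the masked conditional is normalized again.
        p_up = p_up / (p_down + p_up)
        if rng.uniform() < p_up:
            sigma[i] = 1.0
            n_up += 1
        else:
            sigma[i] = -1.0
    return sigma
```

On the two-site example above, this procedure reproduces p(↑↓) = p1 + p2 and p(↓↑) = p3 + p4.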

codecov bot commented Nov 28, 2023

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison: base (56b029a) at 82.68% vs. head (ac5b682) at 82.76%.
Report is 4 commits behind head on master.

| Files | Patch % | Lines |
| --- | --- | --- |
| netket/models/autoreg.py | 87.09% | 2 Missing and 2 partials ⚠️ |
| netket/sampler/autoreg.py | 33.33% | 1 Missing and 1 partial ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1659      +/-   ##
==========================================
+ Coverage   82.68%   82.76%   +0.08%     
==========================================
  Files         298      298              
  Lines       18299    18290       -9     
  Branches     3536     3519      -17     
==========================================
+ Hits        15130    15138       +8     
+ Misses       2481     2470      -11     
+ Partials      688      682       -6     


@gcarleo requested review from Z-Denis and jwnys on December 7, 2023
@wdphy16 (Collaborator, Author) commented Jan 23, 2024

@Z-Denis @jwnys Do you have suggestions about this? If not, maybe we can merge it.

@Z-Denis (Collaborator) commented Jan 23, 2024

I am not the biggest fan of the proliferation of specialised logic in classes that are probably going to be subclassed by users (e.g. ARNNSequential, FastARNNSequential). But if it's just a flag, why not.

@wdphy16 marked this pull request as draft on January 24, 2024