WIP: Initial LPS work #60
Conversation
Hello @AidanGG! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:
Codecov Report

```diff
@@           Coverage Diff            @@
##           develop      #60   +/-  ##
===========================================
- Coverage    86.23%   84.47%   -1.77%
===========================================
  Files           32       32
  Lines         8684     8882     +198
===========================================
+ Hits          7489     7503      +14
- Misses        1195     1379     +184
===========================================
```

Continue to review full report at Codecov.
Hi @AidanGG. Possibly I am missing something, but I'm a bit confused about what's been added here. The LPS is a double row 1D operator, so mixing in single-row vector methods doesn't seem right. I think what would be helpful is an example of the top-level functionality and design that would be useful to you, e.g.:

```python
lps = qtn.LocallyPurifiedState.rand(100, bond_dim=8)
lps.show()
# |  |  |  |  |  |
# O--O--O--O--O--O--
# |  |  |  |  |  |     ... etc
# O--O--O--O--O--O--
# |  |  |  |  |  |
lps.normalize_()
lps.gate_(G, where)
G_expectation = lps.trace()
```

Then start by implementing just the minimal functionality that achieves this (basically working backwards from the top-level functionality). Having such a practical goal motivates the code and can serve as the initial unit tests etc. I'm writing up some docs at the moment that describe in much greater detail how the various bits of quimb's tensor machinery are designed, which may be helpful, as I appreciate it might not be super clear at the moment!
Yes, I have planned to have the LPS only store one of the two rows. In that way, the shape of the LPS resembles an MPO, but only one side of the open indices are physical indices (to which gates can be applied), while the other side are Kraus indices, which may be of different sizes (c.f. MPOs, where both sides are physical indices that should be of matching sizes, possibly permuted). So in a sense the LPS is closer to an MPS, but with a Kraus index on each tensor, which is why I decided to extend the MPS class. Doing this should allow me to reuse its existing machinery. So when dealing with expectation values, I also thought that the contraction would just join the Kraus indices between the ket and bra copies.
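Contracting only the Kraus index between the ket copy and its conjugate is exactly what recovers the density matrix. A minimal single-site numpy sketch (dimensions chosen arbitrarily for illustration) showing that this agrees with the usual Tr(rho G) expectation value:

```python
import numpy as np

rng = np.random.default_rng(42)
d, k = 2, 3                      # physical and Kraus dimensions
A = rng.normal(size=(d, k)) + 1j * rng.normal(size=(d, k))

# contracting only the Kraus index between the ket and bra copies
# gives the (unnormalized) density matrix rho = A A^dagger
rho = np.einsum('pk,qk->pq', A, A.conj())
rho = rho / np.trace(rho)        # normalize

G = np.array([[1, 0], [0, -1]])  # e.g. Pauli-Z as the observable

# <G> two ways: (1) matrix form Tr(rho G); (2) apply G to the ket layer,
# then contract both physical and Kraus indices against the bra layer
expec_1 = np.trace(rho @ G)
expec_2 = (np.einsum('pq,qk,pk->', G, A, A.conj())
           / np.einsum('pk,pk->', A, A.conj()))
```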
Ah OK yes, that makes a lot more sense: essentially an MPS with an ancilla index. I think it might be worth being explicit, e.g. in the name, that this class itself is only half the LPS. I say that as it is not just the methods that define the TN object but also the tensors and network structure it contains. Thoughts on a name along those lines? The full LPS could then be formed with something like:

```python
def to_lps(mpsa):
    ket = mpsa
    bra = mpsa.H
    bra.site_ind_id = 'b{}'
    return ket & bra
```

etc. And methods that form this LPS intermediate can be explicitly named as such.
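One payoff of forming the full LPS as ket-times-conjugate like this is that the resulting operator is Hermitian and positive semidefinite by construction, unlike a generic MPO. A small two-site numpy sketch (shapes and index names chosen purely for illustration) checking this:

```python
import numpy as np

rng = np.random.default_rng(0)
# two-site LPS row: tensors with legs (phys, kraus, bond) and (bond, phys, kraus)
A1 = rng.normal(size=(2, 3, 4))
A2 = rng.normal(size=(4, 2, 3))

# contract the shared bond within the ket row, keeping Kraus legs open
ket = np.einsum('pkb,bql->pqkl', A1, A2)        # legs (p1, p2, k1, k2)

# "ket & bra": contract the Kraus legs between the row and its conjugate,
# giving rho with row index (p1, p2) and column index (q1, q2)
rho = np.einsum('pqkl,rskl->pqrs', ket, ket.conj()).reshape(4, 4)
rho /= np.trace(rho)

# positivity comes for free from the purified form: rho = M M^dagger
evals = np.linalg.eigvalsh(rho)
```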
Hi Johnnie, I'm happy to rename it. I've been slightly busy with other commitments, so progress on this PR might be a bit slow, sorry.
Following our discussion on Gitter, I've begun initial work on LPSs (also known as MPDOs). I still need to implement the places where `NotImplementedError` is raised, and I don't yet know how subsystem lognegativity works for LPSs. I believe the top-level module functions and the functions in `TensorNetwork1DVector` work without modifications, but obviously tests are still required for basically everything.