
WIP: Initial LPS work #60

Open · wants to merge 1 commit into main
Conversation

@AidanGG (Contributor) commented May 19, 2020

Following our discussion on Gitter, I've begun initial work on LPSs (also known as MPDOs). I still need to implement the places where NotImplementedError is currently raised, and I don't yet know how subsystem logarithmic negativity works for LPSs.

I believe the top-level module functions and the functions in TensorNetwork1DVector work without modifications, but obviously tests are still required for basically everything.

@pep8speaks

Hello @AidanGG! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 2366:80: E501 line too long (81 > 79 characters)
Line 2367:80: E501 line too long (80 > 79 characters)
Line 2387:5: E303 too many blank lines (2)
Line 2390:80: E501 line too long (87 > 79 characters)
Line 2494:80: E501 line too long (83 > 79 characters)
Line 2524:80: E501 line too long (86 > 79 characters)
Line 2617:80: E501 line too long (80 > 79 characters)
Line 2916:80: E501 line too long (82 > 79 characters)
Line 2933:80: E501 line too long (87 > 79 characters)


codecov bot commented May 19, 2020

Codecov Report

Merging #60 into develop will decrease coverage by 1.76%.
The diff coverage is 14.64%.


@@             Coverage Diff             @@
##           develop      #60      +/-   ##
===========================================
- Coverage    86.23%   84.47%   -1.77%     
===========================================
  Files           32       32              
  Lines         8684     8882     +198     
===========================================
+ Hits          7489     7503      +14     
- Misses        1195     1379     +184     
Impacted Files                      Coverage Δ
quimb/tensor/tensor_1d.py           73.94% <14.64%> (-9.92%)  ⬇️
quimb/tensor/optimize_pytorch.py     0.00% <0.00%>  (-75.33%) ⬇️
quimb/evo.py                        99.20% <0.00%>  (+<0.01%) ⬆️
quimb/linalg/slepc_linalg.py        93.51% <0.00%>  (+1.17%)  ⬆️
quimb/tensor/optimize_autograd.py   87.43% <0.00%>  (+50.26%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@jcmgray (Owner) commented May 19, 2020

Hi @AidanGG. Possibly I'm missing something, but I'm a bit confused about what's been added here. The LPS is a double-row 1D operator (i.e. it shouldn't mix in methods from TensorNetwork1DVector or TensorNetwork1DFlat), whereas from what I can tell this is maybe something like an MPO.

I think what would be helpful is an example of the top-level functionality and design that would be useful to you, e.g.:

lps = qtn.LocallyPurifiedState.rand(100, bond_dim=8)
lps.show()
    
    |  |  |  |  |  |  
    O--O--O--O--O--O--
    |  |  |  |  |  |     ... etc
    O--O--O--O--O--O--
    |  |  |  |  |  |  

lps.normalize_()
lps.gate_(G, where)
G_expectation = lps.trace()

Then start by implementing just the minimal functionality that achieves this (basically working backwards from the top-level functionality). Having such a practical goal motivates the code and can serve as the initial unit test, etc.
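
For instance, the goal above could double as the first unit test. A minimal sketch, assuming the hypothetical qtn.LocallyPurifiedState API from the snippet (none of these names exist yet):

import pytest
import quimb as qu
import quimb.tensor as qtn

def test_lps_basics():
    # hypothetical constructor and methods from the snippet above
    lps = qtn.LocallyPurifiedState.rand(10, bond_dim=4)
    lps.normalize_()
    assert lps.trace() == pytest.approx(1.0)
    # gating with a unitary should preserve the trace
    lps.gate_(qu.rand_uni(2), 3)
    assert lps.trace() == pytest.approx(1.0)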

I'm writing some docs at the moment that describe in much greater detail how various bits of the quimb tensor machinery are designed, which may be helpful, as I appreciate it might not be super clear at the moment!

@AidanGG (Contributor, Author) commented May 20, 2020

Yes, I planned to have the LPS store only one of the two rows. That way the shape of the LPS resembles an MPO, but only one side of the open indices consists of physical indices (to which gates can be applied); the other side consists of Kraus indices, which may be of different sizes (cf. MPOs, where both sides are physical indices of matching sizes, possibly permuted). So in a sense the LPS is closer to an MPS, but with a Kraus index on each tensor, which is why I decided to extend TensorNetwork1DVector.

Doing this should allow me to reuse gate_TN_1D without any major modifications. When a gate is applied to the single-sided LPS, the whole density matrix is transformed correctly once the Kraus indices are contracted with the conjugate layer.
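
As a minimal single-site sanity check (plain numpy, purely illustrative, not code from this PR): acting with a gate on the physical leg of the purification alone updates the density matrix correctly once the Kraus leg is contracted with the conjugate:

import numpy as np

d, k = 2, 3
# single-site purification tensor: physical leg times Kraus leg
A = np.random.randn(d, k) + 1j * np.random.randn(d, k)
# random unitary gate acting on the physical leg
G = np.linalg.qr(np.random.randn(d, d) + 1j * np.random.randn(d, d))[0]

rho = A @ A.conj().T                    # contract the Kraus leg with the conjugate
rho_gated = (G @ A) @ (G @ A).conj().T  # gate applied to the single-sided ket only

# (G A)(G A)^dag = G rho G^dag, as required
assert np.allclose(rho_gated, G @ rho @ G.conj().T)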

So when dealing with expectation values I also thought that something like

expec_TN_1D(lps, mpo1, mpo2, lps.H)

would just contract the Kraus indices between lps and lps.H.
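
Roughly, the contraction pattern I have in mind, as a toy two-site sketch (made-up index names, not code from this PR), with 'k{i}' the physical legs, 'a{i}' the Kraus legs and 'b01' the bond:

import numpy as np
import quimb.tensor as qtn

d, kr, bd = 2, 3, 4
A0 = qtn.Tensor(np.random.randn(d, kr, bd), inds=('k0', 'a0', 'b01'), tags='I0')
A1 = qtn.Tensor(np.random.randn(d, kr, bd), inds=('k1', 'a1', 'b01'), tags='I1')
ket = qtn.TensorNetwork([A0, A1])

# norm = tr(rho): both the physical and the Kraus legs are contracted
# between the ket layer and its conjugate (inner bond indices are
# mangled automatically when the two layers are combined)
norm = (ket & ket.H) ^ all

# single-site expectation tr(O_0 rho): sandwich O on the physical leg
O = qtn.Tensor(np.random.randn(d, d), inds=('k0*', 'k0'))
bra = ket.H.reindex({'k0': 'k0*'})
expec = (ket & O & bra) ^ all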

@jcmgray (Owner) commented May 20, 2020

Ah OK, yes, that makes a lot more sense: essentially an MPS with an ancilla index on each site. I think it might be worth being explicit, e.g. in the name, that this class itself is only half the LPS. I say that because it's not just the methods that define the TN object, but also the tensors and network structure it contains.

Thoughts on calling this object MPSAncillae or something? Then the LPS could be formed (if necessary) as:

def to_lps(mpsa):
    ket = mpsa
    bra = mpsa.H
    # relabel the bra's physical indices so that only the ancilla
    # indices remain shared (and hence contracted) between the layers
    bra.site_ind_id = 'b{}'
    return ket & bra

etc. Methods that form this LPS intermediate can then be named explicitly as such: mpsa.to_dense_lps().
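
A hedged usage sketch: after the relabelling, only the ancilla indices remain shared between the two layers, so ket & bra really is the two-row LPS, with dangling 'k{i}' (ket) and 'b{i}' (bra) physical legs:

lps = to_lps(mpsa)
# for a small system one can contract everything, leaving the dense
# density operator as a single tensor over the 'k{i}' / 'b{i}' legs
rho = lps ^ all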

@AidanGG (Contributor, Author) commented May 25, 2020

Hi Johnnie, I'm happy to rename it. I've been slightly busy with other commitments so progress on this PR might be a bit slow, sorry.
