
Updating superop module tests allowing rectangular channels/channels with tracing representation #1491

Closed

Conversation

MrRobot2211
Contributor

@MrRobot2211 MrRobot2211 commented Apr 7, 2021

Description
Making the superop module tests compliant with the general pytest scheme. While doing that I found some leftover TODO points, which led me to change the code in the generalized Kraus calculation to use the SVD decomposition of rectangular matrices correctly.
There is still a cryptic TODO referring to ptrace, the meaning of which I cannot guess. Maybe someone with more experience with the module can help.

Changelog

  • Converted superop_rep tests to pytest
  • Changed code in generalized_kraus to allow expansion of rectangular operators.

@coveralls

coveralls commented Apr 7, 2021

Coverage Status

Coverage decreased (-0.04%) to 68.3% when pulling b1a030c on MrRobot2211:update_test_superop_tensor into b71625e on qutip:master.

@MrRobot2211 MrRobot2211 changed the title Updating superop and tensor module tests Updating superop module tests allowing rectangular channels/channels with tracing representation Apr 10, 2021
@MrRobot2211 MrRobot2211 marked this pull request as ready for review April 10, 2021 13:28
@MrRobot2211
Contributor Author

I can confirm that the CI failure occurs only in the coverage test environment, and it disappears if we update SciPy to 1.6.

@jakelishman
Member

That environment deliberately tests against Scipy 1.4 because we still officially support it. This looks like a true test failure - we need to know what's causing it, and most likely fix it in code, not in the test suite.

@MrRobot2211
Contributor Author

Everything else works correctly; the issue appears only at high dimensions - 8, to be precise. I tried it myself in other environments, and as long as SciPy is updated the instability at dimension 8 disappears. I cannot imagine what kind of error would appear only at large dimensions and in a specific SciPy version, other than a numerical instability in that SciPy version itself. Do you have any guess?

@MrRobot2211
Contributor Author

MrRobot2211 commented Apr 11, 2021

To illustrate the point further: for dimension 8 the resulting superoperator has dims [[[8, 8], [8, 8]], [[8, 8], [8, 8]]], which amounts to 16,777,216 numbers. If the difference in each is of the order of 1e-8, this test will not pass, as the resulting total error will be well above 1e-5.
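A rough back-of-the-envelope version of that accumulation (a sketch assuming a Frobenius-type norm for illustration; the trace norm that Qobj.norm() uses by default is at least as large):

import numpy as np

# dims [[[8, 8], [8, 8]], [[8, 8], [8, 8]]]: the underlying Hilbert space is
# 8 * 8 = 64-dimensional, so the superoperator is a 4096 x 4096 matrix.
n_entries = (64 ** 2) ** 2                  # 16_777_216 entries
per_entry_error = 1e-8                      # typical element-wise discrepancy
frobenius_error = np.sqrt(n_entries) * per_entry_error
print(frobenius_error)                      # ~4.1e-5, already above the 1e-5 tolerance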

This difference is not seen in SciPy versions:

  • 1.5
  • 1.6
  • 1.6.2

Going deeper: when running the dimension-8 tests on SciPy 1.4 with verbose output, this error is thrown:

Intel MKL ERROR: Parameter 12 was incorrect on entry to ZHBRDB.

Looking it up on Stack Overflow:
https://stackoverflow.com/questions/54314529/mkl-error-parameter-12-for-large-matrices-with-scipy-linalg-eigvalsh-in-an

and then looking at the PR history in SciPy, I found

scipy/scipy#11304

which refers in particular to this issue:

scipy/scipy#8205

which is consistent with my finding.

SciPy 1.4 has issues in its linalg.eigh() API. I am surprised this is not producing errors elsewhere, especially when calculating the trace norm.

@MrRobot2211
Contributor Author

Hi @jakelishman, do you have any further doubts about the issues with the SciPy 1.4 linalg.eigh() API? To me it is clear that we have to deprecate support for it, as any code that relies on calculating eigenvalues of large matrices hits this MKL error, which is often not even surfaced: instead of terminating, it returns an incorrect numerical value. This affects not just this code but any high-dimensional calculation.

@jakelishman
Member

Thanks for looking deeply into it. It's good to know the cause.

We shouldn't remove support for SciPy 1.4 unless we absolutely have to. Most users won't use matrices that large, so they won't be affected, and SciPy 1.5 is only a year old, which is too recent to be a requirement in an academic setting. We should maintain at least a 2-year dependency window (like NumPy). Any constraints we make on allowable versions affect any packages downstream of us as well, so we want to stay as permissive as possible, as long as there are no new features that we absolutely must have.

We already have mechanisms for working around an unstable eigh implementation because of problems with mac OpenBLAS zheevr in some cases, so we can add in this additional test when setting eigh_unsafe in our initialisation. The principle is that we decay to using the general-purpose eig, and include a specific orthonormalisation step to stabilise the eigensystem afterwards. This code is already in qutip/sparse.py, so it shouldn't be too hard to add an extra condition in qutip/__init__.py.
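A hypothetical sketch of what such an extra condition could look like (this is not the actual qutip/__init__.py code; _uses_mkl is an assumed placeholder, not a real QuTiP helper):

import scipy
from packaging.version import parse as parse_version

def _uses_mkl():
    # Placeholder: a real check would inspect the linked BLAS/LAPACK;
    # return False here so the sketch stays side-effect free.
    return False

def _scipy_eigh_is_suspect():
    # SciPy < 1.5 can mishandle LAPACK workspace parameters for large
    # Hermitian eigenproblems under MKL (scipy/scipy#8205).
    return parse_version(scipy.__version__) < parse_version("1.5") and _uses_mkl()

# In qutip/__init__.py this would roughly become:
# settings.eigh_unsafe = settings.eigh_unsafe or _scipy_eigh_is_suspect()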

Could you also test if the issue is also confined to MKL, or if it persists in OpenBLAS with SciPy 1.4 as well? We lose precision when swapping down to eig in place of eigh, so it's good to confine the switch to the minimum known-bad set.

@MrRobot2211
Contributor Author

Hi Jake, thank you for the response.
I'll open a separate PR with just the update for the failing test and the changes in __init__.py, if you are OK with it.
With respect to the check on OpenBLAS with SciPy 1.4: yes, I can try it on a Linux and a macOS environment just in case.

@MrRobot2211
Contributor Author

@jakelishman There were no issues on OpenBLAS. I opened a separate PR with the changes you suggested and the failing test.

 Dropped superchoichi 8 dimension parameter test
@MrRobot2211
Contributor Author

Hi @jakelishman, I would appreciate it if you could review this. Should we add something in reference to #1160?

Member

@jakelishman jakelishman left a comment

I need to go and refresh myself on a whole bunch of superoperator representation theory to accurately comment on the physics, but it seems to me like something unphysical is happening here?

In your "rectangular" test, you actually construct a square channel. However, what you've actually done is create a superoperator mapping a non-square operator to another, but that's disguised by some off-colour usage of tensor_contract. The matrix you actually construct is proportional to

qutip.sprepost(qutip.qeye(n), qutip.qeye(m))

for n != m, which is a map of non-square operators to non-square operators. I don't know what that represents physically, because those operators don't map states in a Hilbert space to the same space.

I then don't know what it means to ask for the Choi matrix of a mapping C^(n,m) -> C^(n,m) - the definition I know is only valid for channels of the form C^(n,n) -> C^(m,m) (i.e. channels mapping operators on one Hilbert space to operators on a (potentially different) Hilbert space). QuTiP returns a non-square matrix, which also doesn't mesh with my understanding of Choi matrices.
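For reference, the Choi–Jamiołkowski construction I have in mind is (a sketch of the standard definition; some references differ by a 1/n normalisation), for a channel Λ taking operators on C^n to operators on C^m:

J(\Lambda) = \sum_{i,j=1}^{n} |i\rangle\langle j| \otimes \Lambda\bigl(|i\rangle\langle j|\bigr) \in \mathbb{C}^{nm \times nm}

which is square by construction, so a non-square "Choi matrix" doesn't fit this definition.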

That said, this isn't my area of expertise by a long stretch, so do tell me if I'm misunderstanding something. If this is reasonable, we at the very least need some major tests that explicitly compare known values for the Kraus, Choi and Stinespring representations of these sorts of operators.

[Inline review comment threads on qutip/superop_reps.py (2) and qutip/tests/test_superop_reps.py (8); most marked outdated/resolved.]
@jakelishman
Member

jakelishman commented Apr 26, 2021

I've just merged a change to master that causes a couple of minor merge conflicts with this. There's no "real" conflict - it's just that the todense calls got replaced with toarray, so it'll hopefully be easy to fix up. No functionality that you're touching changed in any way.

Any tests you're writing will now need to make sure they're not using todense though, or they'll fail.

allowed test_choit_tr to use dimension fixture
@MrRobot2211
Contributor Author

Thank you for the heads-up. Just in case I am missing something here: I pulled from master and left a todense() in on purpose to see how it would fail, but it doesn't.
See https://github.com/MrRobot2211/qutip/blob/f3e0651a7b203841b4ae4ededce2d5b0524abb82/qutip/tests/test_superop_reps.py#L252

@MrRobot2211
Contributor Author

@jakelishman, I think it is ready for review.

There are a lot of TODOs in the code from the before times, and a general refactor is probably needed, since most of the representations work on the assumption that the channels admit equal right and left tensors, i.e. maps of the form Λ(ρ) = Σ_i A_i ρ A_i† rather than the generalized Σ_i A_i ρ B_i†.
This PR at least allows for different left and right tensors in the Stinespring representation, thus completing the FIXME in the original tests.

and homogenized among all tests
@quantshah
Member

I ended up here while searching for how to apply the Choi matrix to an input state and get the output density matrix. The old code has several TODOs (https://qutip.org/docs/latest/apidoc/functions.html#module-qutip.superop_reps) and we do not have a method to act on a Qobj with a Choi matrix. If you have the Choi representation of a quantum channel and want to apply it to an input state, it is done in the following way (Aamir Anis and A I Lvovsky 2012 New J. Phys. 14 105021):

[Screenshot of the formula from the paper: ρ_out ∝ Tr_in[(ρ_in^T ⊗ I) C_Λ], where C_Λ is the Choi matrix of the channel]
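As a stopgap, here is a sketch of how this can already be done with QuTiP's existing conversion helpers (no new Qobj method assumed):

import qutip

channel = qutip.rand_super(3)       # random channel in 'super' (column-stacking) form
choi = qutip.to_choi(channel)       # its Choi representation
rho_in = qutip.rand_dm(3)

# Convert back to the superoperator representation and act on the vectorised state.
S = qutip.to_super(choi)
rho_out = qutip.vector_to_operator(S * qutip.operator_to_vector(rho_in))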

I guess we can open a new issue to discuss these additions. @MrRobot2211 Good work on this. Could you please update according to the changes requested by @jakelishman?

@jakelishman
Member

jakelishman commented May 25, 2021

@quantshah: Simon and I had temporarily stepped back from this one at the time, because we weren't entirely confident that the physics was being handled correctly - the initial version of the PR certainly had invalid physics, and it was hard to evaluate the new stuff from that perspective. It's changed since then, and it might be ok now, but it'll be easier to re-review now we've cleared the old context from our minds. About your Choi matrices - right now, Qobj doesn't have an act method, but that could well be a useful addition. It does have Qobj.__call__ which does a similar thing in a very few cases, so we could properly beef that up, but we might want to discuss exactly what spec it should have.

@MrRobot2211: I very quickly glanced through the diff of this, but there still seem to be several points where I asked for changes that haven't been changed: things like removing unnecessary calls to np.array, and explaining why dimensions needed to be changed in some tests. Can you go back through the PR and make sure you've addressed everything? Also, since you're trying to add new functionality, please make sure you add tests of explicit, analytically known cases against the whole matrix, in addition to the properties like the dimensions (also asked above). In a super ideal world it'd be great if you could find a published reference for those tests, but if the maths is simple enough that we can verify it by hand, then showing us would be ok.

It could be good practice for the rest of GSoC if you try and go through the diff yourself to spot places that might want clarifying, and fix them ahead of review.

@quantshah
Member

Thanks @jakelishman. Let me open a new issue discussing the application of Choi matrices and a possible Qobj.act_. I thought the way this would work was overriding the __mul__ operator and looking for the type choi, such that something like E*rho works out of the box. But this is wrong, since the Hilbert space dimensions of the second subsystem need to be specified (usually they are the same for trace-preserving processes). There are other quirks to it, so better to discuss in a new thread. Thanks for the quick reply on this.

Contributor Author

@MrRobot2211 MrRobot2211 left a comment

Hi @jakelishman, thank you for the review. I went through and checked all the comments. The only thing that was not quite right was the toarray calls; I corrected one that was still present on the master branch and some that were left over from my refactoring.

Regarding the dimension changes: for most of them there is no deep reason, other than that we can use the same fixture for all tests as long as the original range of dimensions is still covered. But I might be missing something here. There are some tests for which running at larger dimensions is not feasible in a reasonable amount of time, like test_SuperChoiChiSuper, or which only accept even dimensions, and those get a special set of dimensions. Additionally, the original issue in this PR was mainly that the contraction was not being done on the proper index, giving non-square operators. But I am not sure if this is what you were pointing at.

I'll get into manually calculating examples to test whole-matrix matching; maybe I can pull some examples from Watrous' book.

Thank you for the last tip.

Comment on lines 58 to 59
tol = 1e-9
tol = 1e-7
Contributor Author

I assumed you were referring to the tolerance here. I changed it back in a later commit, namely 4735352.

assert_(choi_matrix.type == "super" and choi_matrix.superrep == "choi")
assert_(chi_matrix.type == "super" and chi_matrix.superrep == "chi")
assert_(test_supe.type == "super" and test_supe.superrep == "super")
assert (test_supe - superoperator).norm() < 1e-5
Contributor Author

This was homogenized to the original tolerance in a later commit.

Contributor

@hodgestar hodgestar left a comment

@MrRobot2211 I reviewed the code and it looks mostly good now. I've left a few suggestions and questions. Let me know if you have some time to finish this off before QuTiP 4.7 is released in a month-ish.

@quantshah Could you do a review of the mathematics and general superoperator usage here? You're probably much more familiar with it than me.

[Two resolved inline review comment threads on qutip/superop_reps.py.]
U = U[:, nonzero_idxs]
S = sqrt(S[nonzero_idxs])

dK = np.count_nonzero(S > thresh)
Contributor

I'm just checking why this code was changed. Yes, svd returns the singular values in descending order, but what was wrong with previous code that didn't rely on that?

Contributor Author

Mmm... after looking at it, the changed code here gives exactly the same result. I expect that indexing this way instead of using a mask will be faster, but I will run some tests and check.

Contributor Author

Yes, in fact it runs faster.
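For context, a small standalone sketch of the two selection strategies being compared (illustrative only; the variable names are mine, not the PR's):

import numpy as np

A = np.random.rand(64, 64) + 1j * np.random.rand(64, 64)
U, S, Vh = np.linalg.svd(A)
thresh = 1e-12

# Mask-based selection: does not rely on any ordering of S.
keep = S > thresh
U_mask, S_mask = U[:, keep], S[keep]

# Count-based slicing: relies on numpy returning S in descending order,
# so the first dK singular values are exactly the ones above threshold.
dK = np.count_nonzero(S > thresh)
U_slice, S_slice = U[:, :dK], S[:dK]

assert np.allclose(U_mask, U_slice) and np.allclose(S_mask, S_slice)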

Comment on lines 67 to 71
@pytest.fixture(scope="function", params=[
pytest.param(rand_super, id="super")
])
def superoperator(request, dimension):
return request.param(dimension)
Contributor

This is quite a strange function to read given that it could have just been:

@pytest.fixture
def superoperator(dimension):
    return rand_super(dimension)

Maybe there was a plan to do something more? Could we either add another param or use the simpler implementation?

Contributor Author

I think the idea was to draw from alternative superoperator distributions like rand_super_bcsz or rand_dm_ginibre. Do you think this would be nice to make the tests more thorough, or would it just add time without increasing coverage substantially?
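For concreteness, something like this (a sketch; it just adds rand_super_bcsz as a second distribution drawn with the same dimension argument, and the ids are illustrative):

import pytest
from qutip import rand_super, rand_super_bcsz

@pytest.fixture(params=[
    pytest.param(rand_super, id="super"),
    pytest.param(rand_super_bcsz, id="super_bcsz"),
])
def superoperator(request, dimension):
    # Draw a random superoperator from the parametrized distribution.
    return request.param(dimension)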

Contributor

I think adding the alternative distributions is great if the individual tests are not too slow.

Comment on lines +267 to +268
# FIXME: problem if Kraus index is implicitly
# ptraced!
Contributor

Could you explain the FIXME a bit more? Was this an existing issue? Can we just fix it?

Contributor Author

Yes, this was an existing issue. I will check if the issue persists.

Contributor Author

I re-read the chapter on channel representations in Watrous' book and I still cannot pinpoint exactly what this refers to. I am sure I am missing something. Maybe @quantshah can help me here.

@MrRobot2211
Contributor Author

MrRobot2211 commented Dec 15, 2021 via email

@hodgestar
Contributor

@MrRobot2211 Thanks! Drop a comment here when you'd like me to take another look.

MrRobot2211 and others added 3 commits January 11, 2022 21:46
Co-authored-by: Simon Cross <hodgestar+github@gmail.com>
Co-authored-by: Simon Cross <hodgestar+github@gmail.com>
superoperator generator bcz
@MrRobot2211
Contributor Author

@hodgestar If you want you can take another look. There is a pending solution for the TODO comment but maybe @quantshah can help us with that.

@hodgestar hodgestar added this to the QuTiP 4.7 milestone Feb 9, 2022
Member

@Ericgig Ericgig left a comment

The test improvement looks good.

The support for rectangular channels seems fine, but is not really needed.
There is also an implicit expectation in _generalized_kraus that prod(out_left) == prod(in_right) and prod(in_left) == prod(out_right).

I would accept merging it as is, but I would prefer to merge only the parametrization of the tests for 4.7 and wait for the rest.

@hodgestar
Contributor

I've opened up #1825, which has just the test updates from this PR (as @Ericgig suggested).

@MrRobot2211 I'm going to close this PR, but once @Ericgig has finished the new dims support PR, I think you could revisit adding this to dev.major -- i.e. QuTiP 5 -- if you are up for it.
