
Autodifferentiation support for PytorchBackend #1276

Open · wants to merge 19 commits into base: master
Conversation

@Simone-Bordoni Simone-Bordoni commented Mar 19, 2024

Complete pytorch backend, close issue #1266

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

@Simone-Bordoni Simone-Bordoni added the enhancement New feature or request label Mar 19, 2024
@Simone-Bordoni Simone-Bordoni self-assigned this Mar 19, 2024
@Simone-Bordoni Simone-Bordoni linked an issue Mar 19, 2024 that may be closed by this pull request
@renatomello renatomello added this to the Qibo 0.2.7 milestone Mar 19, 2024
@renatomello renatomello changed the title Autodifferentiation support for pytorch backend → Autodifferentiation support for PytorchBackend Mar 19, 2024
@Simone-Bordoni Simone-Bordoni marked this pull request as ready for review March 28, 2024 13:38
@Simone-Bordoni
Contributor Author

The only differentiation method that was not working for the pytorch backend was the parameter shift rule.
However, I have now unmuted the test and removed the check on the backend, and everything works correctly for the pytorch backend. The problem was probably linked to some error in the initial implementation of the pytorch backend and was fixed by correcting all the other tests.
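For context, the parameter shift rule mentioned here can be sanity-checked on a toy one-parameter model, independent of qibo (all names below are illustrative, not qibo's API): for a gate generated by a Pauli operator over two, such as `RX(theta)`, the exact gradient of an expectation value comes from two shifted evaluations rather than finite differences.

```python
import numpy as np

def expectation(theta):
    # Toy model: <Z> after RX(theta) applied to |0> is analytically cos(theta).
    return np.cos(theta)

def parameter_shift(f, theta, shift=np.pi / 2):
    # Parameter-shift rule for gates with a Pauli/2 generator:
    # the exact gradient from two shifted evaluations of the cost.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift(expectation, theta)
# The analytic derivative of cos(theta) is -sin(theta), and the rule
# reproduces it exactly (up to floating-point rounding), not approximately.
```

This exactness is the point of the rule: the same two-evaluation recipe works on real quantum hardware, where backpropagation through the circuit is not available.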


codecov bot commented Mar 28, 2024

Codecov Report

Attention: Patch coverage is 98.24561%, with 5 lines in your changes missing coverage. Please review.

Project coverage is 99.75%. Comparing base (363a6e5) to head (c95a9ef).

Current head c95a9ef differs from pull request most recent head abd9382

Please upload reports for the commit abd9382 to get more accurate results.

| Files | Patch % | Lines |
| --- | --- | --- |
| src/qibo/backends/pytorch.py | 88.23% | 2 Missing ⚠️ |
| src/qibo/gates/gates.py | 85.71% | 2 Missing ⚠️ |
| ...qibo/quantum_info/superoperator_transformations.py | 96.87% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1276      +/-   ##
==========================================
- Coverage   99.82%   99.75%   -0.07%     
==========================================
  Files          72       72              
  Lines       10560    10645      +85     
==========================================
+ Hits        10541    10619      +78     
- Misses         19       26       +7     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 99.75% <98.24%> (-0.07%) ⬇️ |

Flags with carried forward coverage won't be shown.


@renatomello
Contributor

renatomello commented Mar 28, 2024

> The only differentiation method that was not working for the pytorch backend was the parameter shift rule. However, I have now unmuted the test and removed the check on the backend, and everything works correctly for the pytorch backend. The problem was probably linked to some error in the initial implementation of the pytorch backend and was fixed by correcting all the other tests.

There are also all the classes inside `qibo.models.variational`. They need to work properly with the optimizers. Right now the tests are not passing; they are being skipped.

@Simone-Bordoni
Contributor Author

> > The only differentiation method that was not working for the pytorch backend was the parameter shift rule. However, I have now unmuted the test and removed the check on the backend, and everything works correctly for the pytorch backend. The problem was probably linked to some error in the initial implementation of the pytorch backend and was fixed by correcting all the other tests.
>
> There are also all the classes inside `qibo.models.variational`. They need to work properly with the optimizers. Right now the tests are not passing; they are being skipped.

OK, I forgot these tests; I will try to fix them tomorrow.
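As a rough illustration of what those variational tests exercise, here is a minimal gradient-descent loop on a toy analytic cost (plain numpy, no qibo circuit or optimizer class; all names are illustrative): the parameter-shift gradient drives a single variational parameter toward the cost minimum, exactly the role an SGD optimizer plays in the real training loop.

```python
import numpy as np

def expectation(theta):
    # Toy variational cost: <Z> after RX(theta) on |0>, i.e. cos(theta),
    # minimized at theta = pi where it reaches -1.
    return np.cos(theta)

def shift_gradient(f, theta, shift=np.pi / 2):
    # Parameter-shift gradient: exact for Pauli/2-generated gates.
    return (f(theta + shift) - f(theta - shift)) / 2

theta, lr = 0.5, 0.2
for _ in range(200):
    # Plain SGD step on the single parameter.
    theta = theta - lr * shift_gradient(expectation, theta)
# theta approaches pi and the toy cost approaches its minimum of -1
```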

@scarrazza scarrazza modified the milestones: Qibo 0.2.7, Qibo 0.2.8 Apr 5, 2024
@Simone-Bordoni Simone-Bordoni marked this pull request as draft April 16, 2024 09:44
@Simone-Bordoni
Contributor Author

I have encountered a problem while coding the SGD training procedure of the VQE for the pytorch backend.
When executing a circuit with parametrized gates that require gradients, the gate parameters are initially cast to numpy, which is not compatible with gradient tracking (see the failed test).
Fixing the issue requires modifying numpy_matrices.py. Since this module is of fundamental importance, @renatomello @BrunoLiegiBastonLiegi @MatteoRobbiati maybe we can discuss how to modify it so that it works correctly with torch.

@BrunoLiegiBastonLiegi
Contributor

Do you mean this cast?
https://github.com/qiboteam/qibo/blob/7b4489e6c95e5844796e8c08b6dbaacc9e28984e/src/qibo/backends/npmatrices.py#L76C1-L76C72

Because this is overloaded by the `TorchMatrices` object, I believe.

@Simone-Bordoni
Contributor Author

Simone-Bordoni commented Apr 17, 2024

> Do you mean this cast? https://github.com/qiboteam/qibo/blob/7b4489e6c95e5844796e8c08b6dbaacc9e28984e/src/qibo/backends/npmatrices.py#L76C1-L76C72
>
> Because this is overloaded by the `TorchMatrices` object, I believe.

That cast is fine. The problem is in the two lines before:

```python
cos = self.np.cos(theta / 2.0) + 0j
isin = -1j * self.np.sin(theta / 2.0)
```

When `theta` is a torch tensor with `requires_grad=True`, you can't compute cos/sin using numpy.
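A self-contained sketch of the pattern under discussion (class and method names are illustrative, not the exact qibo code): gate matrices are built through a pluggable `self.np` engine attribute, so with the default numpy engine the two lines above fail on a torch tensor that requires gradients, while a torch-based subclass could pass `torch` as the engine (adapting matrix construction too, since torch builds tensors with `torch.tensor` rather than `np.array`) so that `cos`/`sin` dispatch to torch and keep the autograd graph alive.

```python
import numpy as np

class NumpyMatrices:
    """Illustrative sketch: gate matrices built via a pluggable numerical engine."""

    def __init__(self, engine=np):
        # numpy by default; a torch subclass would pass torch here
        # (and also adapt array construction, e.g. torch.tensor).
        self.np = engine

    def RX(self, theta):
        # The two lines quoted in the discussion: with the numpy engine they
        # raise on a torch tensor with requires_grad=True, because converting
        # such a tensor to a numpy array is not allowed.
        cos = self.np.cos(theta / 2.0) + 0j
        isin = -1j * self.np.sin(theta / 2.0)
        return self.np.array([[cos, isin], [isin, cos]])

matrices = NumpyMatrices()
rx0 = matrices.RX(0.0)      # RX(0) is the 2x2 identity
rxpi = matrices.RX(np.pi)   # RX(pi) is -i times the Pauli-X matrix
```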

@BrunoLiegiBastonLiegi
Contributor

> > Do you mean this cast? https://github.com/qiboteam/qibo/blob/7b4489e6c95e5844796e8c08b6dbaacc9e28984e/src/qibo/backends/npmatrices.py#L76C1-L76C72
> >
> > Because this is overloaded by the `TorchMatrices` object, I believe.
>
> That cast is fine. The problem is in the two lines before: `cos = self.np.cos(theta / 2.0) + 0j`, `isin = -1j * self.np.sin(theta / 2.0)`. When `theta` is a torch tensor with `requires_grad=True`, you can't compute cos/sin using numpy.

But then why isn't `self.np = torch` for `TorchMatrices`, as it is for the `TorchBackend`?

@Simone-Bordoni
Contributor Author

> > > Do you mean this cast? https://github.com/qiboteam/qibo/blob/7b4489e6c95e5844796e8c08b6dbaacc9e28984e/src/qibo/backends/npmatrices.py#L76C1-L76C72
> > >
> > > Because this is overloaded by the `TorchMatrices` object, I believe.
> >
> > That cast is fine. The problem is in the two lines before: `cos = self.np.cos(theta / 2.0) + 0j`, `isin = -1j * self.np.sin(theta / 2.0)`. When `theta` is a torch tensor with `requires_grad=True`, you can't compute cos/sin using numpy.
>
> But then why isn't `self.np = torch` for `TorchMatrices`, as it is for the `TorchBackend`?

I can try to fix things this way. Renato and I just wanted the matrices defined with numpy and then cast to the correct backend.

@renatomello
Contributor

> > > Do you mean this cast? https://github.com/qiboteam/qibo/blob/7b4489e6c95e5844796e8c08b6dbaacc9e28984e/src/qibo/backends/npmatrices.py#L76C1-L76C72
> > >
> > > Because this is overloaded by the `TorchMatrices` object, I believe.
> >
> > That cast is fine. The problem is in the two lines before: `cos = self.np.cos(theta / 2.0) + 0j`, `isin = -1j * self.np.sin(theta / 2.0)`. When `theta` is a torch tensor with `requires_grad=True`, you can't compute cos/sin using numpy.
>
> But then why isn't `self.np = torch` for `TorchMatrices`, as it is for the `TorchBackend`?

This solution should be fine.

@scarrazza scarrazza modified the milestones: Qibo 0.2.8, Qibo 0.2.9 May 23, 2024
@Simone-Bordoni Simone-Bordoni marked this pull request as ready for review May 29, 2024 09:42
Labels: enhancement (New feature or request)
Projects: None yet
Development: Successfully merging this pull request may close these issues: Gradients for Pytorch backend
4 participants