
Added power function to backend to new backends #889

Open

wants to merge 16 commits into master

Conversation

LuiGiovanni
Contributor

This PR implements the power function for the symmetric and PyTorch backends, together with their respective tests.

The changes were implemented in the files:

tensornetwork/backends/pytorch/*
tensornetwork/backends/symmetric/*

This addresses issue #840.

@google-cla google-cla bot added the cla: yes label Dec 18, 2020
@LuiGiovanni
Contributor Author

@mganahl @alewis Hello! Here is the new PR, where you will find the new changes and the tensordot issue. I await your feedback, and thank you for the opportunity.

@LuiGiovanni
Contributor Author

Hello @alewis @mganahl! Running pylint in CI raises an issue which I cannot resolve: it states that python does not exist in module tensorflow. I assume the cause is __init__.py, though updating it with a small change such as adding a . does not seem to solve the issue. My pylint also returns a 10/10 both locally and in GitHub's CI; the difference is that I do not get this error locally.

In my search I found that this is not an isolated issue; other people have raised it elsewhere as well. Here are some examples:

I will continue my search for a solution. My main problem is that we can't resolve the tensordot issue without resolving this one first, so I would love some help or feedback. Thank you so much in advance for all the help and support you have given me!


def power(self, a: Tensor, b: Tensor) -> Tensor:
"""
Returns the power of tensor a to the value of b.
Collaborator

The wording of this docstring seems somewhat imprecise. Can you fix this?

Contributor Author

"""
Returns the exponentiation of tensor a raised to b.
In the case b is a tensor, then the power is by element
with a as the base and b as the exponent.

Args:
  a: The tensor that contains the base.
  b: The tensor that contains a single scalar.
"""

This is the change I made to it; once we resolve the other request, I will push the changes.
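For illustration, plain numpy shows the two cases the revised docstring describes. This is only a sketch of the intended semantics, not the backend code itself:

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])

# Case 1: b is a scalar -- every element of a is raised to the same power.
print(a ** 2.0)  # [ 4.  9. 16.]

# Case 2: b is a tensor of the same shape -- elementwise power,
# with a as the base and b as the exponent.
b = np.array([1.0, 2.0, 3.0])
print(a ** b)    # [ 2.  9. 64.]
```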


def power(self, a: Tensor, b: Tensor) -> Tensor:
"""
Returns the power of tensor a to the value of b.
Collaborator

The __pow__ method of BlockSparseTensor currently does not support broadcasting of any type, hence b has to be a scalar. Please modify the docstring accordingly.

Contributor Author

I'm a bit confused by this request, since this test works fine for me with no failures. Could you elaborate on what the problem is, please?

Collaborator

The reason the test passes is due to a bug in the test (I had actually missed that in the earlier review; see below).
In numpy, a**b below does different things depending on what a and b look like. Generally, numpy applies broadcasting rules (https://numpy.org/doc/stable/user/basics.broadcasting.html) when computing this expression. This approach is relatively widely used by other libraries as well (e.g. tensorflow or pytorch). Supporting broadcasting for BlockSparseTensor is, however, more complicated, because this class has some non-trivial internal structure that needs to be respected by such operations. Currently we just don't support it. The only case supported by BlockSparseTensor is when b is a scalar-type object.

Collaborator

pls add some test ensuring that a and b are BlockSparseTensors

R = 4
backend = symmetric_backend.SymmetricBackend()
a = get_tensor(R, num_charges, dtype)
b = BlockSparseTensor.random(a.sparse_shape)
Collaborator

Pls modify the test so that b is a numpy-scalar
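For reference, a numpy scalar (as opposed to an array) exponent applies uniformly and needs no broadcasting. This is a plain-numpy sketch, not the actual test; helpers like get_tensor from the snippet above belong to the repo's test suite and are not reproduced here:

```python
import numpy as np

a = np.arange(1.0, 5.0)  # stand-in for the BlockSparseTensor base
b = np.float64(2.0)      # a numpy scalar, not an array

print(np.isscalar(b))    # True
print(a ** b)            # [ 1.  4.  9. 16.]
```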

Collaborator

backend.power should only take BlockSparseTensor objects, pls fix.
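A minimal sketch of such a guard, under the constraints discussed above (a must be a BlockSparseTensor, b a scalar). The class here is a placeholder, since the real BlockSparseTensor lives in tensornetwork's block_sparse module; names and error messages are illustrative, not the actual implementation:

```python
import numpy as np

class BlockSparseTensor:  # placeholder for tensornetwork's class
    def __init__(self, data):
        self.data = np.asarray(data)

    def __pow__(self, exponent):
        return BlockSparseTensor(self.data ** exponent)

def power(a, b):
    """Raise BlockSparseTensor `a` to the scalar power `b`."""
    if not isinstance(a, BlockSparseTensor):
        raise TypeError(f"expected BlockSparseTensor, got {type(a).__name__}")
    if not np.isscalar(b):
        raise TypeError("broadcasting is not supported; `b` must be a scalar")
    return a ** b
```

A test for this guard would then assert that passing a plain array (or a tensor-valued b) raises TypeError, while a scalar b succeeds.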

Collaborator

@mganahl mganahl left a comment

thanks again for the PR! Left some comments to be fixed. Thanks!

