Error in optimization process after pytorch upgrade #118
Comments
Would it be possible to post a minimal example that demonstrates the problem? In the meantime you could try one of the other approaches. |
I tried using a modified version of one of the quimb examples:
and from that I got
Does this mean that if you try to run pytorch using the GPU once, it sets that device choice as an environment variable? If so, then I'm afraid I'm not clear on how to switch back to CPU. |
Yeah, the problem seems to be this call:

```python
import torch

x = torch.tensor(2.0, requires_grad=True, device='cuda')
y = torch.tensor(3.0, requires_grad=True, device='cuda')
z = torch.tensor(  # same for as_tensor
    [[x, y],
     [y, 0]],
)
z.device, z.requires_grad
# (device(type='cpu'), False)
```

Need to look into how to get around this. |
Here's a workaround for now:

```python
import torch
import autoray as ar


def _nd_peek(x):
    """Return the first element, if any, of nested
    iterable ``x`` that is a ``torch.Tensor``.
    """
    if isinstance(x, torch.Tensor):
        return x
    elif isinstance(x, (tuple, list)):
        for el in x:
            res = _nd_peek(el)
            # compare against None: a zero tensor is falsy,
            # and multi-element tensors raise on bool()
            if res is not None:
                return res


def _nd_stack(x, device):
    """Recursively stack ``x`` into a ``torch.Tensor``,
    creating any constant elements encountered on ``device``.
    """
    if isinstance(x, (tuple, list)):
        return torch.stack([_nd_stack(el, device) for el in x])
    elif isinstance(x, torch.Tensor):
        # torch element -> leave as is
        return x
    else:
        # torch doesn't like you mixing devices,
        # so create constant elements on the right one
        return torch.tensor(x, device=device)


def torch_array(x):
    """Convert ``x`` into a ``torch.Tensor``, respecting the device
    and gradient requirements of any tensors it contains.
    """
    # work out if we should propagate a device
    any_torch_el = _nd_peek(x)
    if any_torch_el is not None:
        device = any_torch_el.device
    else:
        device = None
    return _nd_stack(x, device)


ar.register_function('torch', 'array', torch_array)
```

If you call this first, then it should work. |
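The peek-then-stack recursion can be exercised with plain Python objects, which also shows why the recursive search should compare against `None` rather than rely on truthiness: a found element could itself be falsy. A minimal sketch (no torch required; `Leaf` is a hypothetical stand-in for `torch.Tensor`):

```python
class Leaf:
    """Hypothetical stand-in for torch.Tensor (no torch needed)."""
    def __init__(self, value):
        self.value = value
    def __bool__(self):
        # mimic a zero-valued tensor being falsy
        return bool(self.value)

def nd_peek(x):
    # return the first Leaf found in nested lists/tuples, else None
    if isinstance(x, Leaf):
        return x
    if isinstance(x, (tuple, list)):
        for el in x:
            res = nd_peek(el)
            if res is not None:  # 'if res:' would skip falsy leaves
                return res
    return None

found = nd_peek([[0, Leaf(0)], [Leaf(1), 2]])
print(found.value)  # the Leaf(0) is found despite being falsy
```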
Thank you very much for getting back to me about that. I tried that code in the toy example and it worked, but in my main program, I got the following error:
|
Again, it's much harder to help with only the error traceback, but I suspect you need to call:

```python
import autoray as ar

...

exp_array = ar.do('array', exp_vals)
```

rather than trying to call numpy functions directly on torch arrays. |
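For context, `ar.do` works by inferring the backend library from the argument's type rather than assuming numpy. A toy illustration of the idea (an assumption of how such dispatch can look, not autoray's actual implementation):

```python
def infer_backend(x):
    # guess which library an array-like belongs to from the module
    # that defines its type, e.g. 'torch' for torch.Tensor,
    # 'numpy' for numpy.ndarray
    return type(x).__module__.split('.')[0]

print(infer_backend([1.0, 2.0]))  # plain Python lists report 'builtins'
```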
Sorry about that. I added the two lines that you suggested, but I got a different error:
My optimization function is defined as
while the loss function is defined as
and the expectation value calculation function is defined as
|
The key thing is that all numeric/array operations need to be dispatched to the correct backend library (in this case torch), e.g.:

```python
exp_array = ar.do('stack', exp_vals)
# exp_array = ar.do('array', exp_vals, like=backend_or_example_array)  # should also work
return exp_array + bias
```
|
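To see why this dispatch matters, here is a hypothetical, heavily simplified stand-in for `ar.do` using only the standard library (the `fallback` module for plain Python containers is an assumption for illustration; autoray's real machinery is more involved):

```python
import importlib
import statistics

def do(fn_name, x, fallback=statistics):
    # simplified sketch of autoray-style dispatch: look up fn_name
    # in the module that defines type(x), so torch tensors get
    # torch functions and numpy arrays get numpy functions;
    # plain Python containers fall back to `fallback`
    backend = type(x).__module__.split('.')[0]
    if backend == 'builtins':
        module = fallback
    else:
        module = importlib.import_module(backend)
    return getattr(module, fn_name)(x)

print(do('mean', [1.0, 2.0, 3.0]))  # dispatches to statistics.mean
```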
What is your issue?
I recently upgraded to pytorch version 1.8.1+cu102, and now I am getting an error when I run the code that you have helped me with before:
I am not sure why, but it looks as though the optimizer is applying pytorch with GPU settings even though the device is set to CPU.