Hi,
First of all, thank you so much for this amazing implementation! I am trying to use your code (the example code), but I run into a problem after moving everything to CUDA.
import torch
from soft_dtw_cuda import SoftDTW  # module name assumed from this repo

device = torch.device("cuda")
# Create the sequences
batch_size, len_x, len_y, dims = 8, 15, 12, 5
x = torch.rand((batch_size, len_x, dims), requires_grad=True)
y = torch.rand((batch_size, len_y, dims))
# Transfer tensors to the GPU
x = x.to(device)
y = y.to(device)
# Create the "criterion" object
sdtw = SoftDTW(use_cuda=True, gamma=0.1)
# Compute the loss value
loss = sdtw(x, y) # Just like any torch.nn.xyzLoss()
# Aggregate and call backward()
loss.mean().backward()
If I print x.grad, the result is None and I get the following warning message:
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py:1083: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:477.)
return self._grad
I'm running the code using Google Colab. Any idea why this is happening? Again thank you so much!
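Reading the warning again, I think what is happening is that x stops being a leaf tensor once I call .to(device) (the GPU copy has a grad_fn), so autograd does not populate its .grad. Below is a minimal sketch of the two fixes the warning seems to point at; I replaced the SoftDTW loss with a dummy sum so it runs standalone, and it assumes a CUDA runtime is available (e.g. a Colab GPU):
import torch

device = torch.device("cuda")  # assumes a CUDA device is available

# Fix 1 (what the warning suggests): keep the original setup, but ask the
# non-leaf GPU copy to retain its gradient.
x = torch.rand((8, 15, 5), requires_grad=True)
x = x.to(device)   # this copy has a grad_fn, so it is no longer a leaf
x.retain_grad()    # autograd will now populate x.grad anyway

# Fix 2: create the tensor on the GPU directly, so it stays a leaf.
x2 = torch.rand((8, 15, 5), device=device, requires_grad=True)

# Dummy stand-in for the SoftDTW loss, just to check that gradients flow
loss = x.sum() + x2.sum()
loss.backward()
print(x.grad is not None, x2.grad is not None)  # True True
Does that match the intended usage here, i.e. should the example create the input tensors on the GPU directly instead of moving them with .to(device)?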