
Error when doing CUDA Conv2d with 1x1 kernel. #547

Closed
NgPDat opened this issue Jan 22, 2017 · 3 comments
NgPDat commented Jan 22, 2017

Conv2d with 1x1 kernel is not working on GPU, although it works fine on CPU:

```py
net = nn.Conv2d(1, 6, kernel_size=(1, 1))
net.cuda()
x = Variable(torch.randn(1, 1, 100, 100))
x.cuda()
net(x)
```

Error message:

```
TypeError: FloatSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, [torch.FloatTensor bias or None], torch.FloatTensor finput, torch.FloatTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)
```

I tried disabling cuDNN with `torch.backends.cudnn.enabled = False` but still got the same error message.

I use Ubuntu 14.04, CUDA 7.5, cuDNN 5.1.5, and Python 3.5.2, with PyTorch installed from binaries.


apaszke commented Jan 23, 2017

If you look closely at the argument types that were given to conv, you'll see that some of the tensors are `torch.cuda.FloatTensor`s, while the others are plain `torch.FloatTensor`s. You probably forgot to send the input to the GPU.

apaszke closed this as completed Jan 23, 2017

colesbury commented Jan 23, 2017

To clarify, instead of:

```py
x = Variable(torch.randn(1, 1, 100, 100))
x.cuda()  # This creates a copy on the GPU and immediately discards it. "x" is still on the CPU
```

You should write:

```py
x = Variable(torch.randn(1, 1, 100, 100).cuda())
```
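In current PyTorch (where `Variable` is no longer needed), the same fix is usually written device-agnostically. A minimal sketch, assuming a recent PyTorch install; it falls back to CPU when CUDA is unavailable:

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Conv2d(1, 6, kernel_size=(1, 1)).to(device)
x = torch.randn(1, 1, 100, 100, device=device)  # created directly on the device
y = net(x)  # model and input now live on the same device
```

Creating the tensor with `device=...` avoids the discarded-copy pitfall entirely, since there is no separate move step to forget.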


jdhao commented Oct 31, 2017

I think it is better to make model.cuda() and x.cuda() behave consistently, to avoid confusion.
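The inconsistency described above can be sketched with a toy class (hypothetical names, not PyTorch code): `Module.cuda()` moves its parameters in place and returns `self`, while `Tensor.cuda()` returns a new tensor and leaves the original untouched.

```python
class ToyModule:
    """Mimics nn.Module.cuda(): mutates in place and returns self."""
    def __init__(self):
        self.device = "cpu"

    def cuda(self):
        self.device = "cuda"  # in-place move
        return self


class ToyTensor:
    """Mimics torch.Tensor.cuda(): returns a copy; the original is unchanged."""
    def __init__(self, device="cpu"):
        self.device = device

    def cuda(self):
        return ToyTensor("cuda")  # out-of-place copy


net = ToyModule()
net.cuda()   # net is moved even without rebinding
x = ToyTensor()
x.cuda()     # copy discarded; x is still on the CPU
x = x.cuda() # tensors must be rebound to the returned copy
```

This is exactly why the snippet in the original report failed: the module call moved the weights, but the unbound `x.cuda()` call left the input on the CPU.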

zou3519 pushed a commit to zou3519/pytorch that referenced this issue Mar 30, 2018
ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this issue Mar 16, 2020
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this issue Sep 20, 2021
pytorchmergebot pushed a commit that referenced this issue Jun 27, 2022
Summary:
X-link: pytorch/data#547

Fixes pytorch/data#538
- Improve the validation function to raise a warning about an unpicklable function when either a lambda or a local function is provided to a DataPipe.
- The inner function of a functools.partial object is extracted for validation as well.
- Mimic the behavior of the pickle module for a local lambda: pickle raises an error naming the enclosing local function rather than the lambda, so we raise a warning about the local function, not the lambda.
```py
>>> import pickle
>>> def fn():
...     lf = lambda x: x
...     pickle.dumps(lf)
>>> fn()
AttributeError: Can't pickle local object 'fn.<locals>.<lambda>'
```
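The validation logic described in the bullets above can be sketched roughly as follows (a simplified illustration with hypothetical names, not the actual torchdata implementation):

```python
from functools import partial


def classify_callable(fn):
    """Classify a callable the way the validation warning would.

    Sketch only: unwraps functools.partial objects, then flags local
    functions (including local lambdas, matching pickle's behavior)
    before flagging top-level lambdas.
    """
    # Unwrap nested functools.partial objects to reach the inner function.
    while isinstance(fn, partial):
        fn = fn.func
    qualname = getattr(fn, "__qualname__", "")
    if "<locals>" in qualname:
        return "local function"  # pickle reports the local object first
    if getattr(fn, "__name__", "") == "<lambda>":
        return "lambda"
    return "ok"
```

For example, a lambda wrapped in `partial` is still detected, and a lambda defined inside another function is reported as a local function, mirroring the pickle error shown above.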

This diff also fixes the error introduced by #79344.

Test Plan:
CI on PyTorch and TorchData
Manually validated the tests from TorchVision

Differential Revision: D37417556

Pull Request resolved: #80232
Approved by: https://github.com/NivekT