2080Ti RuntimeError: CUDA out of memory #39

Open
cn0xroot opened this issue Dec 8, 2021 · 7 comments

@cn0xroot

cn0xroot commented Dec 8, 2021

model loaded: ./weights/paprika.pt
Traceback (most recent call last):
File "test.py", line 92, in
test(args)
File "test.py", line 48, in test
out = net(image.to(device), args.upsample_align).cpu()
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/init3/Tools/animegan2-pytorch/model.py", line 106, in forward
out = self.block_e(out)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 12.74 GiB (GPU 0; 10.76 GiB total capacity; 1.19 GiB already allocated; 7.09 GiB free; 2.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

input:
samples/inputs/1.jpg
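
As an aside for anyone hitting this: the allocator hint in the error message is worth trying first, but it only mitigates fragmentation and cannot make a single 12.74 GiB allocation fit on a 10.76 GiB card, so shrinking the input is usually the real fix. A minimal sketch, assuming the variable is set before CUDA is initialized:

# Sketch: apply the allocator hint from the error message.
# max_split_size_mb only reduces fragmentation of PyTorch's cached blocks;
# it cannot satisfy one 12.74 GiB allocation on a 10.76 GiB card.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # must be set before CUDA is initialized

import torch  # imported after the env var so the caching allocator picks it up
print(f"{torch.cuda.get_device_properties(0).total_memory / 1024**3:.2f} GiB total")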

@EuphoricPenguin

This is happening on my 1650 as well. It looks like the requested allocation overshoots the total GPU memory regardless of the card.

  File "C:\Users\Mars\Documents\animegan2-pytorch\test.py", line 89, in <module>
    test(args)
  File "C:\Users\Mars\Documents\animegan2-pytorch\test.py", line 47, in test
    out = net(input, args.upsample_align).squeeze(0).permute(1, 2, 0).cpu().numpy()
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Mars\Documents\animegan2-pytorch\model.py", line 106, in forward
    out = self.block_e(out)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 4.68 GiB (GPU 0; 4.00 GiB total capacity; 1.32 GiB already allocated; 294.20 MiB free; 2.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
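
A quick way to check whether the failure is fragmentation or simply an allocation larger than the card is to print the allocator's counters right before the failing net(...) call; a small sketch using standard torch.cuda calls:

# Sketch: show what the CUDA caching allocator currently holds, to tell
# fragmentation apart from a single allocation that just doesn't fit.
import torch

gib = 1024 ** 3
print(f"allocated: {torch.cuda.memory_allocated() / gib:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / gib:.2f} GiB")
print(torch.cuda.memory_summary(abbreviated=True))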

@sunyclj

sunyclj commented Dec 21, 2021

This is happening on my RTX 6000 as well. If I use the CPU instead, it is very slow.
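
CPU inference at least completes, just slowly. A rough sketch of a CPU-only run under torch.no_grad(); it assumes the Generator class in this repo's model.py, the paprika weights from the first comment, and the same [-1, 1] scaling that test.py appears to use, so double-check against the script:

# Sketch of CPU-only inference. Assumes model.py's Generator class and the
# paprika weights from the first comment; pre/post-processing mirrors what
# test.py appears to do, but verify before relying on it.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

from model import Generator  # from this repository

device = "cpu"
net = Generator().eval().to(device)
net.load_state_dict(torch.load("./weights/paprika.pt", map_location=device))

img = Image.open("samples/inputs/1.jpg").convert("RGB")
x = to_tensor(img).unsqueeze(0) * 2 - 1            # scale to [-1, 1]

with torch.no_grad():                              # inference only: no autograd buffers
    out = net(x.to(device), False).cpu()           # second arg: upsample_align

out = out.squeeze(0).clamp(-1, 1) * 0.5 + 0.5      # back to [0, 1]
to_pil_image(out).save("out_cpu.png")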

@InfiniteLife

Same on an RTX 2080 Ti. It consumes an enormous amount of memory.

@OnlyToGo

OnlyToGo commented Jan 5, 2022

Same on RTX 3060.

@uu9

uu9 commented Mar 14, 2022

On a single 2080 Ti, a 640x640 input works and uses about 8-9 GiB of memory.
For larger images, run on the CPU instead; it only takes a few seconds.
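
Following that observation, a hedged sketch that shrinks any input whose longer side exceeds 640 px before inference; the 640 px cap is taken from the comment above as a rule of thumb for an 11 GiB card, not a value from the repository:

# Sketch: shrink oversized inputs before GPU inference. The 640 px cap comes
# from the 2080 Ti observation above and is an assumption to tune per card.
from PIL import Image

MAX_SIDE = 640

def load_capped(path, max_side=MAX_SIDE):
    img = Image.open(path).convert("RGB")
    scale = max_side / max(img.size)
    if scale < 1.0:                       # only ever shrink, never enlarge
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    return img

img = load_capped("samples/inputs/1.jpg")
print(img.size)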

@zhengjiedna

Same on RTX 3080.

@biepenghaomie

I guess the input image is so large that it makes the model use this much GPU memory.
