
torch.cuda.OutOfMemoryError: CUDA out of memory #57

Open
panxkun opened this issue Mar 17, 2023 · 1 comment


panxkun commented Mar 17, 2023

Hi, this work is amazing, but I have run into a strange problem. On a desktop computer with a 2080 (8 GB) GPU it works well. However, when I run it on a server with a 3090 (24 GB) GPU, it crashes with an out-of-memory error. I thought it might be a CUDA/PyTorch version problem, so I made the server environment identical to the desktop one, but it still fails. Do you have any suggestions?

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.77 GiB (GPU 0; 23.70 GiB total capacity; 19.12 GiB already allocated; 3.93 GiB free; 19.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
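Since reserved memory (19.13 GiB) is far above what the failed allocation needs, the error's own hint about allocator fragmentation may apply. A minimal sketch of setting `max_split_size_mb` as the message suggests (the value `128` is just an example, and the variable must be set before PyTorch initializes CUDA):

```python
import os

# Assumption: this runs before `import torch` (or at least before any CUDA
# allocation), because the caching allocator reads PYTORCH_CUDA_ALLOC_CONF
# once at initialization. max_split_size_mb caps the size of blocks the
# allocator will split, which can reduce fragmentation for large allocations.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, it can be exported in the shell (`export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`) before launching the training script.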


kts707 commented Mar 26, 2023

Hi, I also encountered the same issue on an RTX 3090 GPU when I tried to run training on the LLFF datasets. Have you resolved this issue? Thanks!
