
[help wanted] Just how much VRAM is needed to enable the option for OpenAI's larger model? #125

Open
ScaryMonsterr opened this issue Jan 31, 2022 · 1 comment


ScaryMonsterr commented Jan 31, 2022

I've got a 3090 here. Every attempt to train immediately fails with an error from PyTorch (I tried both 256 and 512 image sizes, but I don't think that makes a difference in this case):

RuntimeError: CUDA out of memory. Tried to allocate 260.00 MiB (GPU 0; 24.00 GiB total capacity; 21.16 GiB already allocated; 0 bytes free; 21.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Should I do what the runtime error suggests, or would that introduce stability problems?
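
For reference, the allocator option the error message mentions is set through an environment variable before the first CUDA allocation. A minimal Python sketch follows; the 128 MiB value is only an illustrative starting point, not a recommendation from this thread:

    import os

    # Must be set before torch initializes its CUDA allocator.
    # max_split_size_mb stops the caching allocator from splitting
    # blocks larger than this size, which can reduce fragmentation;
    # 128 here is illustrative, not a tuned value.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch  # imported after setting the variable so it takes effect

The same thing can be done by exporting the variable in the shell before launching the training script.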

@illtellyoulater

I managed to run the larger model by using --num_cutouts 24 (the default value is 128) and leaving the image size untouched.
I'm using a GPU with 12 GB of VRAM.
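
For anyone landing here, a hypothetical sketch of what that setting looks like in code; num_cutouts is the only name taken from this thread, while the module, class, and other arguments are placeholders for whatever entry point the project actually exposes:

    # Hypothetical sketch: "num_cutouts" comes from the comment above;
    # "some_project" and "Imagine" are placeholders, not this project's
    # actual API.
    from some_project import Imagine

    dream = Imagine(
        text="a painting of a lighthouse",  # placeholder prompt
        image_size=512,                     # left at a default, per the comment
        num_cutouts=24,  # default reported as 128; fewer cutouts = less VRAM
    )
    dream()

Fewer cutouts means fewer augmented crops are pushed through the model per step, which is why it lowers peak memory.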
