
CUDA out of memory #68

Answered by victorca25
3majsie asked this question in Q&A

Hello! Typically, 1x models will consume more VRAM than 4x models if the same configuration is used (especially batch size and crop size).

If you are using the standard ESRGAN architecture, the recommendation here is to reduce the batch size, the crop size, or both, so the tensors fit in memory.
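To see why both knobs matter: per-layer activation memory grows linearly with batch size but quadratically with crop size, so halving the crop frees roughly four times as much memory as halving the batch. A minimal back-of-the-envelope sketch (the channel count and fp32 assumption are illustrative, not taken from any specific config):

```python
def activation_bytes(batch, channels, crop, dtype_bytes=4):
    """Rough per-layer activation footprint for square crops.

    Assumes fp32 tensors (4 bytes/element); real training also holds
    weights, gradients, and optimizer state, so this is a lower bound.
    """
    return batch * channels * crop * crop * dtype_bytes


# Hypothetical example: batch 16, 64 feature channels, 128px crops.
base = activation_bytes(16, 64, 128)          # 67,108,864 bytes = 64 MiB
half_batch = activation_bytes(8, 64, 128)     # 32 MiB  (2x reduction)
half_crop = activation_bytes(16, 64, 64)      # 16 MiB  (4x reduction)

print(base // 2**20, half_batch // 2**20, half_crop // 2**20)
```

Halving the crop size is usually the cheaper lever for OOM errors, at the cost of the network seeing less spatial context per patch.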

Alternatively, you can try the pixel-unshuffle wrapper, but that depends on how you plan to use the model after it is trained, and on whether that code supports models trained with the wrappers.
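The idea behind the wrapper is that pixel-unshuffle rearranges an input of shape (C, H, W) into (C·r², H/r, W/r), so a 1x model can run on spatially smaller (but deeper) tensors, the way a 2x or 4x model would. PyTorch ships this as `torch.nn.PixelUnshuffle`; a dependency-light NumPy sketch of the same rearrangement (function name and shapes are illustrative, not the repo's actual wrapper):

```python
import numpy as np

def pixel_unshuffle(x, factor=2):
    """Rearrange (C, H, W) -> (C * factor**2, H // factor, W // factor).

    No pixels are created or lost; spatial resolution is traded for
    channel depth, which is what reduces activation memory downstream.
    """
    c, h, w = x.shape
    assert h % factor == 0 and w % factor == 0, "H and W must divide by factor"
    x = x.reshape(c, h // factor, factor, w // factor, factor)
    x = x.transpose(0, 2, 4, 1, 3)  # (C, fh, fw, H/f, W/f)
    return x.reshape(c * factor * factor, h // factor, w // factor)


# A 3-channel 128x128 crop becomes a 12-channel 64x64 tensor.
crop = np.random.rand(3, 128, 128)
out = pixel_unshuffle(crop, factor=2)
print(out.shape)  # (12, 64, 64)
```

The catch mentioned above is at inference time: whatever code loads the trained model must apply the same unshuffle (and the matching shuffle on the output), or the tensor shapes will not line up.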
