
CUDA out of memory. #217

Open
wenyuhaokikika opened this issue Nov 4, 2022 · 2 comments

Comments

wenyuhaokikika commented Nov 4, 2022

I had this problem when running the embed stage of bio_embeddings:

ERROR:bio_embeddings.embed.embedder_interfaces:Error processing batch of 3 sequences: CUDA out of memory. Tried to allocate 972.00 MiB (GPU 1; 7.80 GiB total capacity; 4.91 GiB already allocated; 717.31 MiB free; 4.92 GiB reserved in total by PyTorch). You might want to consider adjusting the `batch_size` parameter. Will try to embed each sequence in the set individually on the GPU.
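The log's fallback behaviour ("Will try to embed each sequence in the set individually") can be sketched as a generic pattern. This is a minimal illustration, not bio_embeddings code; `embed_batch` is a hypothetical callable standing in for whatever actually runs the model:

```python
def embed_with_fallback(embed_batch, sequences):
    """Try to embed the whole batch first; if the backend raises an
    out-of-memory RuntimeError (as PyTorch does for CUDA OOM), retry one
    sequence at a time so each allocation is much smaller.
    `embed_batch` is a hypothetical callable, not the bio_embeddings API.
    """
    try:
        return embed_batch(sequences)
    except RuntimeError as err:
        if "out of memory" not in str(err):
            raise  # unrelated error, don't swallow it
        # Per-sequence retry: slower, but peak memory drops to one sequence.
        return [result for seq in sequences for result in embed_batch([seq])]
```

This matches why the final results can still be correct: the batch attempt fails, but each sequence is then re-embedded on its own.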


Although the final result is computed, I am not sure whether it was computed correctly.
Is there an option to avoid this, e.g. reducing the batch_size or using multiple GPUs?
I did not find the relevant options in `examples/parameters_blueprint.yml`.


zff1116 commented Nov 4, 2022

I seem to have the same problem...


fedorn commented Nov 11, 2022

EmbedderInterface.embed_many has a batch_size argument, but it doesn't use batching by default and processes each sequence individually.
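A minimal sketch of the batching that a batch_size argument implies. `batch_by_residues` is a hypothetical helper, not bio_embeddings code, and the assumption that batch_size is a budget on total residues per batch (rather than a sequence count) should be checked against the library's docs:

```python
def batch_by_residues(sequences, batch_size):
    """Group sequences so the total residue count per batch stays at or
    under `batch_size`. Illustrative sketch only; lowering the budget
    trades throughput for a smaller peak GPU allocation."""
    batch, used = [], 0
    for seq in sequences:
        # Start a new batch when adding this sequence would exceed the budget.
        if batch and used + len(seq) > batch_size:
            yield batch
            batch, used = [], 0
        batch.append(seq)
        used += len(seq)
    if batch:
        yield batch
```

With a budget that small batches respect, a caller hitting CUDA OOM could pass a reduced batch_size to embed_many instead of relying on the per-sequence fallback.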
