
High memory usage in inference. #58

Open

toomanycats opened this issue Jan 3, 2024 · 1 comment

Comments

@toomanycats

I've found that when running the Singularity container version of the DISCO pipeline, we had to request 32 GB of memory from our Sun Grid Engine scheduler for the pipeline to run.

I made a sandboxed version of the Singularity container and added a cache clear on a hunch. This appears to have worked, but I'm still double-checking.

def inference(T1_path, b0_d_path, model, device):
+    torch.cuda.empty_cache()
    # Eval mode
    model.eval()
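
For context, a minimal sketch of that change with a device guard. Only the function signature and the two lines above come from the pipeline; everything else here is assumed, not the actual DISCO source:

    import torch

    def inference(T1_path, b0_d_path, model, device):
        # empty_cache() only releases blocks held by the CUDA caching
        # allocator; it does nothing for host (CPU) RAM, so it is only
        # meaningful when the model actually runs on a GPU.
        if torch.device(device).type == "cuda":
            torch.cuda.empty_cache()

        # Eval mode
        model.eval()
        # ... rest of the original inference code ...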
@toomanycats
Author

UPDATE:

The cache clearing didn't help, and the call is probably a no-op anyway since the device is not CUDA. I'm now attempting another idea: explicitly using the float16 datatype rather than what we think is the default, float32.
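
A rough sketch of what the float16 idea could look like, reusing the inference signature above. `load_volume()` and the `model(t1, b0)` call are placeholders for however the pipeline actually loads and feeds the images, not the real DISCO code. Also note that float16 support on CPU is more limited than on CUDA, so some ops may lack half-precision kernels:

    import torch

    def inference_fp16(T1_path, b0_d_path, model, device):
        # Cast the model weights to float16, roughly halving their memory
        # footprint relative to PyTorch's float32 default.
        model = model.to(device=device, dtype=torch.float16)
        model.eval()

        # load_volume() is a hypothetical stand-in for the pipeline's
        # actual loading of the T1 and b0 images into tensors.
        t1 = load_volume(T1_path)
        b0 = load_volume(b0_d_path)

        with torch.no_grad():
            # Inputs must match the model's dtype, so cast them as well.
            t1 = t1.to(device=device, dtype=torch.float16)
            b0 = b0.to(device=device, dtype=torch.float16)
            out = model(t1, b0)  # assumed call signature

        # Return float32 on the CPU for downstream saving/processing.
        return out.float().cpu()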
