[Misc]: Building docker container requires insane amount of memory #350
Comments
Where did you set the MAX_JOBS variable? It should be set in the Dockerfile, right before the build command towards the end.
I tried setting it at line 30 in the Dockerfile, but the build still gets "Killed" by the OOM killer.
Perhaps it would be best to pull the aphrodite package from PyPI instead of building it in Docker.
I don't think the Aphrodite package supports custom-made AWS endpoints... |
With MAX_JOBS=2 it compiles OK with 64 GB of RAM.
Ah right, this reminds me, @mrseeker: we build for all GPU architectures, which may take more time and use more memory. You can try getting rid of the export for the torch CUDA arch list; that'll probably help.
Found out that if I change the arch list to include just the arch that I need, peak memory during compilation drops to almost 90 GB...
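Putting the two mitigations from this thread together, the relevant Dockerfile lines might look something like the sketch below. The arch value `8.6` and the build command are illustrative assumptions, not taken from the repo's actual Dockerfile; substitute the compute capability of your own GPU and the project's real build step.

```dockerfile
# Cap parallel nvcc/compile jobs so peak RAM stays bounded
# (each job can use several GB when compiling CUDA kernels)
ENV MAX_JOBS=2

# Build only for the GPU architecture you actually need;
# building for every architecture multiplies time and memory.
# 8.6 is an example (RTX 30xx-class cards) -- adjust for your GPU.
ENV TORCH_CUDA_ARCH_LIST="8.6"

# Illustrative build step; use the project's actual build command here
RUN pip install --no-cache-dir -e .
```

Roughly, peak memory scales with MAX_JOBS times the number of target architectures, which matches the reports above: all architectures at high parallelism exhausts well over 100 GB, while MAX_JOBS=2 fits in 64 GB and a single-arch build peaks near 90 GB at default parallelism.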
Anything you want to discuss about Aphrodite.
I am trying to build a custom version of Aphrodite; however, building the Aphrodite engine with Docker requires an insane amount of memory and CPU. Is there a way to reduce this?
I already tried setting MAX_JOBS=1, but that did not help.