[Feature]: Is there a reason CUDA 6.1 is the minimum? Would CUDA 6.0 on the P100 not work? #413
Comments
It's mostly due to the QuIP# kernels. I'll look into extending support to P100s (we used to support them before) tomorrow.
Ah, I see. So for now it only fails when using the QuIP# kernels? I was thinking that if it were as easy as changing setup.py, and the other quantization methods would then work, it's a non-issue. I just wanted to make sure whether it will work at all, or whether there is a bigger change in Aphrodite as a whole that makes it incompatible with P100s. I'm going to put together either a 4xP100 or 4xP40 system to test out the larger models and higher-context models that just came out, so I'm trying to make sure the stuff I want to run on them works first. The Tesla P100s are a great deal because they're 16GB cards with over 2x the bandwidth of the P40s, although if speed is no concern, the P40s are a better deal at 24GB. Aphrodite is currently working great on my 2x3090, so thanks for your work on this project!
I did try it myself on the dev branch, but I'm way out of my depth. I got it to build using the runtime and exporting TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", but actually trying to load a model results in "RuntimeError: CUDA error: no kernel image is available for execution on the device". As far as I understand, PyTorch does still ship kernels for the P100, so I'm unsure what's going wrong here.
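As an aside: the "no kernel image is available" error generally means no compiled architecture in the wheel covers the GPU's compute capability. A minimal sketch of how a `TORCH_CUDA_ARCH_LIST`-style string maps to device coverage (a hypothetical helper for illustration, not part of Aphrodite or PyTorch):

```python
# Hypothetical helper: check whether a compute capability is covered by a
# TORCH_CUDA_ARCH_LIST-style string. Entries ending in "+PTX" embed PTX that
# the driver can JIT-compile for GPUs of that capability or newer.
def arch_list_covers(arch_list: str, capability: tuple[int, int]) -> bool:
    for entry in arch_list.split():
        base = entry.removesuffix("+PTX")
        entry_cap = tuple(int(x) for x in base.split("."))
        if entry_cap == capability:
            return True  # exact SASS match
        if entry.endswith("+PTX") and entry_cap <= capability:
            return True  # forward-compatible via PTX JIT
    return False

# The P100 is compute capability 6.0:
print(arch_list_covers("6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", (6, 0)))      # False
print(arch_list_covers("6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", (6, 0)))  # True
```

Under this model, a build whose arch list starts at 6.1 produces no image a P100 can run, which is consistent with the error above even when PyTorch itself ships P100 kernels.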
Please check #444. It builds for sm_60, but I haven't tested if it actually runs.
I'm waiting on cards from eBay but will try when I get them. Thanks!
Can't run it; it still says 'RuntimeError: CUDA error: no kernel image is available for execution on the device', using the latest image of alpindale/aphrodite-engine.
@online2311 we forgot to bump the build architectures in the Dockerfile; this will be fixed in the next release. If you want to build it yourself, edit the Dockerfile like this:

```diff
diff --git a/docker/Dockerfile b/docker/Dockerfile
index adcdeb1..330f89c 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -32,7 +32,7 @@ ENV CUDA_HOME=/usr/local/cuda
 ENV HF_HOME=/tmp
 ENV NUMBA_CACHE_DIR=$HF_HOME/numba_cache
-ENV TORCH_CUDA_ARCH_LIST="6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX"
+ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX"
 RUN python3 -m pip install --no-cache-dir -e .
 # Workaround to properly install flash-attn. For reference
@@ -44,7 +44,7 @@ ENTRYPOINT ["/app/aphrodite-engine/docker/entrypoint.sh"]
 EXPOSE 7860
-# Service UID needs write access to $HOME to create temporary folders, see #458
+# Service UID needs write access to $HOME to create temporary folders, see #458
 RUN chown 1000:1000 ${HOME}
 USER 1000:0
```
Thank you very much, I recompiled the image according to your patch and now it is ready for model inference. |
🚀 The feature, motivation and pitch
In setup.py, compute capability 6.1 is checked as the minimum, and that requirement is also stated in the README. Is there a technical reason compute capability 6.0 is not supported? Is it for INT8 support?
I ask because nothing inherently stops vLLM, which Aphrodite is forked from, from working with compute capability 6.0 on the Tesla P100 cards, as can be seen in this discussion: vllm-project/vllm#963 (comment)
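For illustration, a minimal sketch of what a setup.py-style minimum-capability gate might look like (hypothetical; the actual check in Aphrodite's setup.py may differ):

```python
# Hypothetical build-time capability gate; the real setup.py logic may differ.
# Capabilities are (major, minor) tuples: (6, 0) for the P100, (6, 1) for the P40.
MIN_CAPABILITY = (6, 1)

def check_capabilities(device_capabilities, minimum=MIN_CAPABILITY):
    """Raise if any detected device falls below the supported minimum."""
    too_old = [cap for cap in device_capabilities if cap < minimum]
    if too_old:
        raise RuntimeError(
            f"GPUs with compute capability below {minimum} are unsupported: {too_old}"
        )

check_capabilities([(6, 1), (8, 6)])   # P40 + 3090: passes
# check_capabilities([(6, 0)])         # P100: would raise RuntimeError
```

With a gate like this, lowering the minimum to (6, 0) is trivial; the real question, as discussed above, is whether the compiled kernels (e.g. QuIP#) actually run on sm_60.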
Alternatives
No response
Additional context
No response