
Is it possible to use Python CUDA libraries from a virtual env? The original Whisper is super easy to install since it doesn't require me to change my system CUDA version and simply pulls in the needed libraries using pip. #153

Open
Gobz opened this issue Apr 15, 2023 · 22 comments

Comments

@Gobz

Gobz commented Apr 15, 2023

No description provided.

@guillaumekln
Contributor

Currently you can already make use of these libraries but you need to manually set the environment variable LD_LIBRARY_PATH before running Python. I verified that the following works:

pip install nvidia-cublas-cu11 nvidia-cudnn-cu11

export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`

(Note that this only works on Linux systems.)

I will look into whether we can load these libraries automatically when they are installed. It's an improvement we should make in the underlying CTranslate2 library.
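For scripts, the two commands above can be folded into a small launcher that sets the variable and then re-execs the interpreter so the dynamic loader sees it (a hedged sketch, Linux only; the helper names are illustrative, not part of faster-whisper):

```python
import os
import sys


def join_ld_paths(dirs, existing=""):
    # Join directories into an LD_LIBRARY_PATH-style value, keeping any
    # pre-existing entries at the end.
    parts = [d for d in dirs if d]
    if existing:
        parts.append(existing)
    return ":".join(parts)


def nvidia_lib_dirs():
    # Lib directories of the pip-installed cuBLAS/cuDNN packages, if present.
    try:
        import nvidia.cublas.lib
        import nvidia.cudnn.lib
        return [
            os.path.dirname(nvidia.cublas.lib.__file__),
            os.path.dirname(nvidia.cudnn.lib.__file__),
        ]
    except (ImportError, TypeError):  # TypeError if __file__ is None
        return []


def relaunch_with_nvidia_libs(flag="_NVIDIA_LIBS_SET"):
    # The loader reads LD_LIBRARY_PATH only at process start, so set it and
    # re-exec once; the flag variable prevents an exec loop.
    if os.environ.get(flag) == "1":
        return
    os.environ[flag] = "1"
    os.environ["LD_LIBRARY_PATH"] = join_ld_paths(
        nvidia_lib_dirs(), os.environ.get("LD_LIBRARY_PATH", "")
    )
    os.execv(sys.executable, [sys.executable] + sys.argv)
```

Calling `relaunch_with_nvidia_libs()` at the top of a script would restart it exactly once with the variable set, before faster-whisper is imported.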

@guillaumekln guillaumekln pinned this issue May 4, 2023
@hoonlight
Contributor

hoonlight commented Jun 10, 2023

> @guillaumekln: Currently you can already make use of these libraries but you need to manually set the environment variable LD_LIBRARY_PATH before running Python. […]

Is it possible to get it to work with a venv on Windows by any chance? I have CUDA 11.8 and cuDNN for CUDA 11.x installed properly, but it's not working.

Could not load library cudnn_cnn_infer64_8.dll. Error code 126
Please make sure cudnn_cnn_infer64_8.dll is in your library path!

Would installing PyTorch help?

What's really weird is that a few days ago it worked fine without any CUDA, cuDNN, or zlib installation. After a clean install of Windows, it doesn't work.

Edit: The same error occurs even in a non-virtual environment. I did everything, including installing CUDA 11.8, cuDNN for CUDA 11.x, and zlib, and adding the paths, but I don't know why this is happening.

@guillaumekln
Contributor

This solution does not work on Windows because NVIDIA only provides Linux binaries for the cuDNN package:

https://pypi.org/project/nvidia-cudnn-cu11/#files

Installing PyTorch will not help in this case.

I did everything, including cuda 11.8 installation, cudnn 11.x installation, zlib installation, and path addition, but I don't know why this is happening.

Maybe you should double-check the PATH setting. I know it can be tricky to get it right.

You could also look at @Purfview's standalone executable: https://github.com/Purfview/whisper-standalone-win. There you can download the NVIDIA libraries and simply put them in the same directory as the executable.

@hoonlight
Contributor

hoonlight commented Jun 12, 2023

> @guillaumekln: This solution does not work on Windows because NVIDIA only provides Linux binaries for the cuDNN package. Maybe you should double-check the PATH setting. […]

Thanks, I'll try again to make sure I'm not missing anything.
I just have one question: is the installation of cuda, cudnn "essential" for running faster-whisper on a gpu?
I don't remember installing cudnn (before the windows reinstall), but only because device="cuda" just worked fine.
If they are required to be installed, my memory is probably wrong.

@guillaumekln
Contributor

Yes these libraries are required for GPU execution. An error is raised when you try to use the GPU but these libraries cannot be found.

@yangyaofei

Following this solution, I ended up with the code below:

try:
    import os
    import nvidia.cublas.lib
    import nvidia.cudnn.lib

    cublas_path = os.path.dirname(nvidia.cublas.lib.__file__)
    cudnn_path = os.path.dirname(nvidia.cudnn.lib.__file__)
    os.environ["LD_LIBRARY_PATH"] = f"{cublas_path}:{cudnn_path}"
except ModuleNotFoundError:
    pass

But I still get the same error, as if the path were not set.

@guillaumekln
Contributor

The environment variable LD_LIBRARY_PATH should be set before starting the Python process.
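One hedged workaround for this constraint: the dynamic loader caches libraries opened by absolute path, so preloading them with ctypes inside the process can let later lookups by soname succeed without touching LD_LIBRARY_PATH. A minimal sketch (Linux; `cudnn_sonames` and `preload` are illustrative names, and the soname list is taken from the error messages in this thread):

```python
import ctypes
import os


def cudnn_sonames(major=8):
    # The cuDNN sonames reported in the "Could not load library" errors above.
    return [
        f"libcudnn_ops_infer.so.{major}",
        f"libcudnn_cnn_infer.so.{major}",
    ]


def preload(lib_dir, names):
    # dlopen each library that exists in lib_dir with RTLD_GLOBAL so that
    # later lookups by soname resolve to the already-loaded handle.
    loaded = []
    for name in names:
        path = os.path.join(lib_dir, name)
        if os.path.exists(path):
            ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
            loaded.append(name)
    return loaded
```

This would have to run before importing ctranslate2/faster_whisper, pointed at the `nvidia/cudnn/lib` directory inside site-packages.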

@yangyaofei

@guillaumekln Oh, thank you for explaining this. It seems a once-and-for-all solution isn't easy 😂.

@Feanix-Fyre

I'm pretty certain that NVIDIA offers cuDNN files of some sort for Windows at https://developer.nvidia.com/rdp/cudnn-download. I did get faster-whisper to work on the GPU on Windows, but I remember it being quite a hassle, whether due to my inexperience or genuine difficulty.

@guillaumekln
Contributor

Yes, you can download cuDNN binaries for Windows on the NVIDIA website.

But this issue is about installing cuDNN via PyPI with:

pip install nvidia-cudnn-cu11

This does not work on Windows.

@lugia19

lugia19 commented Jul 3, 2023

If you're on Windows, a functioning workaround I've found is to install torch with cuda support, then add the "lib" subfolder to your PATH.

It works since the lib folder contains DLLs for both cuBLAS and cuDNN.
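A per-process variant of this workaround (a sketch, assuming Python 3.8+ on Windows and the standard torch wheel layout; the helper names are illustrative): register torch's bundled lib folder with os.add_dll_directory instead of editing the global PATH.

```python
import os


def torch_lib_dir(torch_root):
    # A CUDA-enabled torch wheel ships its DLLs under "<site-packages>/torch/lib".
    return os.path.join(torch_root, "torch", "lib")


def add_torch_dlls():
    # Register torch's DLL folder for this process only (Windows, Python 3.8+),
    # leaving the global PATH untouched. Returns the folder, or None if torch
    # is not installed.
    try:
        import torch
    except ImportError:
        return None
    site_dir = os.path.dirname(os.path.dirname(torch.__file__))
    lib_dir = torch_lib_dir(site_dir)
    if os.name == "nt" and os.path.isdir(lib_dir):
        os.add_dll_directory(lib_dir)
    return lib_dir
```

Calling `add_torch_dlls()` before importing faster_whisper should make the cuBLAS/cuDNN DLLs bundled with torch visible.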

@vadi2

vadi2 commented Aug 1, 2023

The trick in #153 (comment) isn't working for me; what am I doing wrong?

(faster-whisper) vadi@barbar:~$ pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
Requirement already satisfied: nvidia-cublas-cu11 in ./Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages (11.11.3.6)
Requirement already satisfied: nvidia-cudnn-cu11 in ./Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages (8.9.2.26)
(faster-whisper) vadi@barbar:~$ export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
(faster-whisper) vadi@barbar:~$ echo $LD_LIBRARY_PATH 
/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/nvidia/cublas/lib:/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/nvidia/cudnn/lib
(faster-whisper) vadi@barbar:~$ python ~/Downloads/faster-whisper.py 
Traceback (most recent call last):
  File "/home/vadi/Downloads/faster-whisper.py", line 6, in <module>
    model = WhisperModel(model_size, device="cuda", compute_type="float16")
  File "/home/vadi/Programs/miniconda3/envs/faster-whisper/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 124, in __init__
    self.model = ctranslate2.models.Whisper(
RuntimeError: CUDA failed with error unknown error
(faster-whisper) vadi@barbar:~$ 

Running Nvidia driver 535.54.03 on RTX 4080 on Ubuntu 22.04.

@guillaumekln
Contributor

This is probably another issue. I think it happens when the GPU driver is not loaded correctly (e.g. it was just updated to a new version). Rebooting the system will often fix this type of error.

@vadi2

vadi2 commented Aug 2, 2023

It did. Thanks!

@Simon-chai

nvidia.cublas.lib.__file__

Why is my nvidia.cublas.lib.__file__ attribute None? Because of that, the environment variable fails to be set, and when I run faster_whisper I hit this error:
"Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Please make sure libcudnn_ops_infer.so.8 is in your library path!"

@guillaumekln
Contributor

The error is about cuDNN, not cuBLAS.

You should double-check that you correctly installed the pip packages as shown in #153 (comment).

@s-h-a-d-o-w

> @guillaumekln: Currently you can already make use of these libraries but you need to manually set the environment variable LD_LIBRARY_PATH before running Python. […]

Doesn't work for me on Ubuntu (WSL).

__file__ doesn't exist:

>>> print(nvidia.cublas.lib.__file__)
None

But the libraries are installed:

andy@work:~$ pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: nvidia-cublas-cu11 in ./.local/lib/python3.8/site-packages (11.10.3.66)
Requirement already satisfied: nvidia-cudnn-cu11 in ./.local/lib/python3.8/site-packages (8.5.0.96)
Requirement already satisfied: setuptools in /usr/local/lib/python3.8/dist-packages (from nvidia-cublas-cu11) (68.2.2)
Requirement already satisfied: wheel in ./.local/lib/python3.8/site-packages (from nvidia-cublas-cu11) (0.40.0)
>>> import os; import nvidia.cublas.lib; import nvidia.cudnn.lib;
>>> print(nvidia.cublas.lib)
<module 'nvidia.cublas.lib' (namespace)>

Setting the paths manually like so worked:

export LD_LIBRARY_PATH="$HOME/.local/lib/python3.8/site-packages/nvidia/cublas/lib/:$HOME/.local/lib/python3.8/site-packages/nvidia/cudnn/lib/"
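The `__file__ is None` failure above is typical of namespace packages, which nvidia.cublas.lib and nvidia.cudnn.lib are; their `__path__` still lists the real directories. A hedged sketch that falls back to `__path__` (the helper names are illustrative):

```python
import os


def package_dirs(module):
    # Namespace packages can have __file__ set to None (or missing entirely);
    # __path__ still lists their concrete directories.
    if getattr(module, "__file__", None):
        return [os.path.dirname(module.__file__)]
    return list(getattr(module, "__path__", []))


def nvidia_ld_library_path():
    # Build an LD_LIBRARY_PATH value from the pip-installed cuBLAS/cuDNN
    # packages, tolerating namespace-package layouts.
    dirs = []
    try:
        import nvidia.cublas.lib
        import nvidia.cudnn.lib
        dirs = package_dirs(nvidia.cublas.lib) + package_dirs(nvidia.cudnn.lib)
    except ImportError:
        pass
    return ":".join(dirs)
```

As noted earlier in the thread, the resulting value still has to be exported before the Python process that runs faster-whisper starts.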

@gu-ma

gu-ma commented Nov 30, 2023

> @guillaumekln: Currently you can already make use of these libraries but you need to manually set the environment variable LD_LIBRARY_PATH before running Python. […]

Very cool, thanks for the tip! Btw if you're using a conda env you can set the env var like this (in your environment):

conda env config vars set LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`

It will overwrite the default path.

@pauloboritza

> @guillaumekln: This solution does not work on Windows because NVIDIA only provides Linux binaries for the cuDNN package. […]
>
> @hoonlight: Thanks, I'll try again to make sure I'm not missing anything. […]

Error on Windows:

....
RuntimeError: Library cublas64_11.dll is not found or cannot be loaded

Download https://github.com/Purfview/whisper-standalone-win/releases/download/libs/cuBLAS.and.cuDNN_win_v3.zip, unzip the *.dll files into the project path, and add these lines to the app:

import ctypes
cublas64_11 = ctypes.WinDLL(r'.\cublas64_11.dll')

@santiago-afonso

> @lugia19: If you're on Windows, a functioning workaround I've found is to install torch with cuda support, then add the "lib" subfolder to your PATH. […]

It's been a while, but what command did you use to install torch specifically?

@lugia19

lugia19 commented Feb 7, 2024

> @santiago-afonso: It's been a while, but what command did you use to install torch specifically?

Oh, it was just the usual pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 type deal.

Just installing torch with CUDA.

@louistiti

> @guillaumekln: Currently you can already make use of these libraries but you need to manually set the environment variable LD_LIBRARY_PATH before running Python. […]

Hi @guillaumekln, may I know if this improvement has been made in CTranslate2? I'm using CUDA 12.1 and facing the same issue. It works when I export the LD_LIBRARY_PATH env variable, though.

Happy to help for anything you'd need.
