
How to use the local cached model? #822

Open · LXRee opened this issue May 7, 2024 · 5 comments

LXRee commented May 7, 2024

Hello, I have downloaded the model from Hugging Face and stored it on my local disk, but faster-whisper cannot find the locally stored model and tries to download it online. The error is below:
[screenshot of the error]

trungkienbkhn (Collaborator) commented:

@LXRee, hello. You don't need to pass the local_files_only and cache_dir options; instead of the model name, pass the local path of the downloaded model as the first argument (model_size_or_path).
For example:

from faster_whisper import WhisperModel

model = WhisperModel("/path/faster-distil-whisper-large-v3", device="cuda")
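For a fuller picture, here is a minimal end-to-end sketch of loading from a local directory and transcribing; the model directory and audio file names are placeholders, and local_files_only=True is optional but makes the call fail fast instead of falling back to a download:

from faster_whisper import WhisperModel

# Point at the directory that contains model.bin, config.json,
# tokenizer.json, etc. (placeholder path)
model = WhisperModel(
    "/path/faster-distil-whisper-large-v3",
    device="cuda",
    compute_type="float16",
    local_files_only=True,  # never reach out to the Hugging Face Hub
)

segments, info = model.transcribe("audio.wav")  # placeholder audio file
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))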


LXRee commented May 9, 2024 via email


LXRee commented May 11, 2024 via email

trungkienbkhn (Collaborator) commented:

Yes, you should update to CUDA 12 for the latest version of faster-whisper (1.0.2). If you are on CUDA 11, you can instead downgrade the ctranslate2 module to 3.24.0, as mentioned here.
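After downgrading (e.g. pip install ctranslate2==3.24.0), a quick sanity check from Python, assuming ctranslate2 imports cleanly, is:

import ctranslate2

# Confirm the pinned version and that CTranslate2 can see the GPU.
print(ctranslate2.__version__)              # expect 3.24.0 on CUDA 11
print(ctranslate2.get_cuda_device_count())  # 0 means no CUDA device is visible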


LXRee commented May 11, 2024 via email
