
Can't load tokenizer for 'elinas/llama-7b-hf-transformers-4.29' #94

Open
xiaoxingchen505 opened this issue Jul 8, 2023 · 2 comments
@xiaoxingchen505
total vram = 96869.25
required vram(full=13858, 8bit=8254, 4bit=5140)
determined model type: alpaca
Traceback (most recent call last):
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/gradio/blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/gradio/blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/16tb_hdd/xxc/LLM-As-Chatbot/app.py", line 291, in download_completed
    global_vars.initialize_globals(tmp_args)
  File "/home/16tb_hdd/xxc/LLM-As-Chatbot/global_vars.py", line 176, in initialize_globals
    model, tokenizer = load_model(
  File "/home/16tb_hdd/xxc/LLM-As-Chatbot/models/alpaca.py", line 17, in load_model
    tokenizer = LlamaTokenizer.from_pretrained(
  File "/home/xiaoxingchen/.conda/envs/llm-serve/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1830, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'elinas/llama-7b-hf-transformers-4.29'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'elinas/llama-7b-hf-transformers-4.29' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.

Hi, I'm running into this error right now.
Can anyone tell me how to fix it?
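The OSError above names two common causes: a local directory shadowing the Hub id, or missing/uncached tokenizer files. A minimal sketch for ruling out the first cause (the check itself is my suggestion, not part of the app):

```python
import os

# Model id copied from the traceback above.
MODEL_ID = "elinas/llama-7b-hf-transformers-4.29"

# If a relative directory with the same name as the Hub id exists in the
# working directory, transformers resolves it locally instead of querying
# the Hub, and a directory without tokenizer files triggers exactly this
# OSError.
shadowed = os.path.isdir(MODEL_ID)
print("local directory shadows the Hub id:", shadowed)
```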

@deep-diver (Owner)

It seems like there is some sort of internal error in the Hugging Face Hub infrastructure.

@oldwizard1010

Remove the --local-files-only flag.
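For context, a --local-files-only flag typically maps onto the `local_files_only` keyword of `from_pretrained`, which forbids any Hub download and raises OSError when the files are not already cached. A minimal sketch of that wiring (this is an assumption about how such a flag is usually handled, not the app's actual code):

```python
import argparse

# Hypothetical wiring: how a --local-files-only CLI flag usually reaches
# LlamaTokenizer.from_pretrained(model_id, local_files_only=...).
parser = argparse.ArgumentParser()
parser.add_argument("--local-files-only", action="store_true")

# Flag removed, as suggested above: local_files_only defaults to False,
# so transformers is allowed to download the tokenizer from the Hub.
args_without = parser.parse_args([])

# Flag present: with a cold cache, from_pretrained would raise the
# OSError shown in the traceback.
args_with = parser.parse_args(["--local-files-only"])

print(args_without.local_files_only)
print(args_with.local_files_only)
```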


3 participants