Thank you for your wonderful work, but I ran into an issue when running python minigpt4_video_inference.py on an Ubuntu system:
Loading LLAMA
Traceback (most recent call last):
File "/share/data02/wuyiqidata/MiniGPT4-video/minigpt4_video_inference.py", line 165, in
model, vis_processor = init_model(args)
File "/share/data02/wuyiqidata/MiniGPT4-video/minigpt4/common/eval_utils.py", line 63, in init_model
model = model_cls.from_config(model_config).to('cuda:0')
File "/share/data02/wuyiqidata/MiniGPT4-video/minigpt4/models/mini_gpt4_llama_v2.py", line 765, in from_config
model = cls(
File "/share/data02/wuyiqidata/MiniGPT4-video/minigpt4/models/mini_gpt4_llama_v2.py", line 125, in init
self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model,use_fast=False) #
File "/share/data02/wuyiqidata/Anaconda3/envs/minigpt4_video/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2013, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'meta-llama/Llama-2-7b-chat-hf'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-2-7b-chat-hf' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.
How can I fix it?
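For reference, one common cause of this error (an assumption, not something the traceback confirms): meta-llama/Llama-2-7b-chat-hf is a gated repository on the Hugging Face Hub, so from_pretrained cannot fetch the tokenizer unless you have accepted the Llama 2 license on the model page and authenticated (for example via huggingface-cli login or the HF_TOKEN environment variable). An alternative that avoids the Hub lookup entirely is to download the weights once and point the model config's llama_model entry at the local directory. A minimal sketch, assuming the key is named llama_model as in mini_gpt4_llama_v2.py (the exact YAML file and local path are hypothetical):

```yaml
model:
  # Hypothetical local path; must contain tokenizer.model,
  # tokenizer_config.json, and the model weight files.
  llama_model: "/share/data02/wuyiqidata/models/Llama-2-7b-chat-hf"
```

With a local directory, LlamaTokenizer.from_pretrained resolves the path on disk and never contacts huggingface.co, which also sidesteps network or authentication issues on the cluster.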