minigpt4_video_demo Llama-2-7b-chat-hf #7
Comments
Hello @Mikes95, you can authenticate with:

```python
from huggingface_hub import login
login(token="your generated token")
```

For pure Python code without the UI, you can try the inference file.
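A small pre-flight check can also make this failure mode clearer: verify a Hugging Face token is available before launching the demo, instead of waiting for the tokenizer to raise an `OSError`. This is an illustrative sketch, not part of the repo — the helper name `ensure_hf_token` and the env-var-based approach are assumptions; `HF_TOKEN` and `HUGGING_FACE_HUB_TOKEN` are the variable names the Hugging Face tooling conventionally reads.

```python
import os


def ensure_hf_token() -> str:
    """Return a Hugging Face access token from the environment.

    Checks HF_TOKEN first, then the older HUGGING_FACE_HUB_TOKEN name,
    and raises a clear error if neither is set.
    """
    for var in ("HF_TOKEN", "HUGGING_FACE_HUB_TOKEN"):
        token = os.environ.get(var)
        if token:
            return token
    raise RuntimeError(
        "No Hugging Face token found. Generate one at "
        "https://huggingface.co/settings/tokens, then export HF_TOKEN "
        "or call huggingface_hub.login(token=...) before loading the model."
    )
```

Calling this at the top of a launch script turns a confusing download error into an actionable message. Note that for gated models like Llama-2, the account behind the token must also have been granted access on the model page.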
Thank you for the prompt response. I was also wondering how easy it is to perform fine-tuning on my own dataset, and whether there is more detailed documentation than the README on how to format the videos and their corresponding texts. Thank you in advance.
Hello @Mikes95, thank you for your nice question. I added a README describing the steps for adding a new dataset here: Custom_training
I'm trying to run the demo code. I cloned the repository and installed all the dependencies. I downloaded `video_llama_checkpoint_last.pth` and placed it in the checkpoints folder. When I run `python minigpt4_video_demo.py --ckpt /miniGPT4/MiniGPT4-video/checkpoints/video_llama_checkpoint_last.pth --cfg-path test_configs/llama2_test_config.yaml`, I receive this error:

```
OSError: Can't load tokenizer for 'meta-llama/Llama-2-7b-chat-hf'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-2-7b-chat-hf' is the correct path to a directory containing all relevant files for a LlamaTokenizer.
```
How can I fix this?
Is there also a demo available purely in code, without using Streamlit for an HTML interface?