Failed to convert llava-llama3 model to HF format #656
Comments
@Jason8Kang Hi! Meanwhile, please specify the
@LZHgrla Thanks, it works. This is the warning. When I load the LLava_format with HuggingFace, it fails; there may be a similar error.
@Jason8Kang
I also thought so at first, but when I load the LLava_format this way, it gives a warning, and when I run inference, it raises an error.
@Jason8Kang Hi, the official xtuner/llava-llama3 can be used with lmdeploy and works well. I ran into the same llava-llama3 HuggingFace conversion problem; I followed your changed code in this PR and it solved the problem. However, the trained llava-llama3 weights cannot be used with the lmdeploy pipeline, and the error is as follows:
The lmdeploy code is as follows:
@ztfmars |
Thank you, it works with your script, but it still gives the following warning. Do you know what this warning means? I'm not sure whether it matters.
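For context on warnings of this kind: when a loader such as HF `from_pretrained` restores a checkpoint, the "missing" and "unexpected" weight warnings are essentially set differences between the model's parameter names and the checkpoint's keys. A minimal sketch of that comparison (the key names below are illustrative placeholders, not the actual llava-llama3 checkpoint keys):

```python
# Sketch: how loaders typically report weight-name mismatches.
# Key names here are illustrative assumptions, not real checkpoint keys.

def diff_keys(model_keys, checkpoint_keys):
    """Return (missing, unexpected) key lists, in the style of
    HF from_pretrained warnings: missing = model expects but checkpoint
    lacks; unexpected = checkpoint has but model does not use."""
    missing = sorted(set(model_keys) - set(checkpoint_keys))
    unexpected = sorted(set(checkpoint_keys) - set(model_keys))
    return missing, unexpected

model_keys = [
    "language_model.embed_tokens.weight",
    "multi_modal_projector.linear_1.weight",
]
checkpoint_keys = [
    "language_model.embed_tokens.weight",
    "projector.model.0.weight",
]

missing, unexpected = diff_keys(model_keys, checkpoint_keys)
print(missing)     # weights left randomly initialized in the loaded model
print(unexpected)  # checkpoint weights silently dropped
```

In general, such a warning is harmless only if the listed weights are ones you intend to retrain or truly do not need; if core language-model or projector weights show up in either list, inference will misbehave, which matches the symptom described above.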
@Jason8Kang |
In the project https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, an example is given of how to convert a llava-llama3 model to HF format:
python ./convert_xtuner_weights_to_hf.py --text_model_id ./iter_39620_xtuner --vision_model_id ./iter_39620_visual_encoder --projector_weight ./iter_39620_xtuner/projector/model.safetensors --save_path ./iter_39620_llava
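For intuition, a conversion script like this essentially re-maps the state-dict key names from the training layout onto the HF LLaVA layout before saving. A minimal sketch of such prefix remapping (the prefix pairs are assumptions for illustration, not the script's exact mapping table):

```python
# Sketch: prefix-based state-dict key remapping, as a weight-conversion
# script might do. The prefix pairs are illustrative assumptions only.

PREFIX_MAP = {
    "projector.model.": "multi_modal_projector.",  # assumed pair
    "llm.": "language_model.",                     # assumed pair
}

def remap_keys(state_dict):
    """Rename each key whose prefix appears in PREFIX_MAP; leave
    unmatched keys untouched."""
    out = {}
    for key, value in state_dict.items():
        for old, new in PREFIX_MAP.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        out[key] = value
    return out

sd = {"llm.embed_tokens.weight": 1, "projector.model.0.weight": 2}
print(remap_keys(sd))
```

If the loader later warns about missing or unexpected keys, it usually means a remapping like this was incomplete or the checkpoint came from a different layout than the script expects.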
I followed it this way:
CUDA_VISIBLE_DEVICES=4 python ./xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/convert_xtuner_weights_to_hf.py --text_model_id ./llama3_llava_pth/merge --vision_model_id ${vit} --projector_weight llama3_llava_pth/hf/projector --save_path ./llama3_llava_pth/LLava_format
However, I get the following error.
Is there any mistake in my steps? Thank you for your answer!