I followed the llama3 fine-tuning tutorial at https://github.com/SmartFlowAI/Llama3-Tutorial/blob/main/docs/llava.md and used llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_lora_e1_finetune.py to fine-tune on my own dataset, aiming to get a LLaVA-Llama3-8B multimodal model.
After training and converting the checkpoint from pth to HF format, I got an LLM adapter, a visual encoder adapter, and a projector.
However, I can't merge the LLM with the LLM adapter to obtain the merged LLM weights as described in https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336. The merge command and the resulting error are listed below.
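For reference, the pth -> HF conversion step was roughly the following (a sketch of the tutorial's command; the work_dirs path here is illustrative, not my exact directory):

xtuner convert pth_to_hf \
    llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_lora_e1_finetune.py \
    ./work_dirs/llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_lora_e1_finetune/iter_1000.pth \
    ./iter_1000_hf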
/llava_train_20240506$ xtuner convert merge \
    /home/fusionai/.cache/modelscope/hub/LLM-Research/Meta-Llama-3-8B-Instruct \
    /home/fusionai/project/internllm_demo/llama3/llama3-ft/llava_train_20240506/iter_1000_hf/llm_adapter \
    /home/fusionai/project/internllm_demo/llama3/llama3-ft/llava_train_20240506/iter_1000_llava
[2024-05-08 09:51:48,946] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
[WARNING] using untested triton version (2.1.0), only 1.0.0 is known to be compatible
Loading checkpoint shards: 100%|██████████| 4/4 [00:05<00:00, 1.25s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/fusionai/project/internllm/xtuner/xtuner/tools/model_converters/merge.py", line 73, in <module>
    main()
  File "/home/fusionai/project/internllm/xtuner/xtuner/tools/model_converters/merge.py", line 56, in main
    model_unmerged = PeftModel.from_pretrained(
  File "/home/fusionai/anaconda3/envs/llama3/lib/python3.10/site-packages/peft/peft_model.py", line 324, in from_pretrained
    config = PEFT_TYPE_TO_CONFIG_MAPPING[
  File "/home/fusionai/anaconda3/envs/llama3/lib/python3.10/site-packages/peft/config.py", line 151, in from_pretrained
    return cls.from_peft_type(**kwargs)
  File "/home/fusionai/anaconda3/envs/llama3/lib/python3.10/site-packages/peft/config.py", line 118, in from_peft_type
    return config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'layer_replication'

Additional description: how can I solve this problem? Waiting for help, thanks!
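For anyone hitting the same TypeError: as far as I can tell, the layer_replication field was added to LoraConfig in peft 0.10.0, so the adapter_config.json here was written by a newer peft than the one doing the loading. A minimal fix sketch, assuming upgrading peft inside the xtuner environment is acceptable:

pip install -U "peft>=0.10.0"

If the adapter never actually used layer replication (the field is null in adapter_config.json), deleting that key from the file should also let an older peft load it.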
Update: yes, it works! But there are obvious version conflicts between xtuner and lmdeploy over peft, so I will set up a separate venv for lmdeploy and continue.
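My plan for the separate environment is a minimal conda setup like the following (the lmdeploy version is whatever pip resolves; nothing here is pinned by the tutorial):

conda create -n lmdeploy_env python=3.10 -y
conda activate lmdeploy_env
pip install lmdeploy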