
LLaMA-Adapter: Efficient Fine-tuning of LLaMA 🚀

The official codebase has been transferred to OpenGVLab/LLaMA-Adapter for ongoing maintenance!

Citation

If you find our LLaMA-Adapter code and paper useful, please cite:

@article{zhang2023llamaadapter,
  title={LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention},
  author={Zhang, Renrui and Han, Jiaming and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Gao, Peng and Qiao, Yu},
  journal={arXiv preprint arXiv:2303.16199},
  year={2023}
}

@article{gao2023llamaadapterv2,
  title={LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model},
  author={Gao, Peng and Han, Jiaming and Zhang, Renrui and Lin, Ziyi and Geng, Shijie and Zhou, Aojun and Zhang, Wei and Lu, Pan and He, Conghui and Yue, Xiangyu and Li, Hongsheng and Qiao, Yu},
  journal={arXiv preprint arXiv:2304.15010},
  year={2023}
}

About

Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
