Fine-tune Llama-2-7b using QLoRA on Google Colab

Explore instruction fine-tuning and how to address catastrophic forgetting in large language models (LLMs). Learn how instruction fine-tuning can improve performance on specific tasks, even with smaller models and tight resource constraints.

Strategies Covered

Discover strategies such as multitask fine-tuning and parameter-efficient fine-tuning (PEFT) for tackling catastrophic forgetting. The focus is on PEFT's memory efficiency, which comes from training only a small set of added parameters while the base model stays frozen, as the sketch below illustrates.
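A minimal sketch of PEFT's parameter savings using the Hugging Face peft library. The small stand-in base model and the LoRA hyperparameters are illustrative assumptions, not values from the notebook:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Small stand-in model so the example runs quickly; the notebook targets Llama-2-7b.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

peft_config = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, peft_config)

# Only the adapter weights are trainable; the frozen base model keeps its
# original knowledge, which is what helps mitigate catastrophic forgetting.
model.print_trainable_parameters()  # typically well under 1% of all parameters
```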

Introducing LoRA and QLoRA

Learn about LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation), two parameter-efficient fine-tuning techniques, and understand their benefits and differences: QLoRA additionally loads the frozen base model in 4-bit precision, as sketched below.
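A minimal sketch of what the "Q" in QLoRA adds: the frozen base model is quantized to 4-bit NF4 through bitsandbytes, while the LoRA adapter itself is trained in higher precision. The settings shown are common QLoRA-style defaults, assumed here rather than taken from the notebook:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize the base weights to 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4, introduced in the QLoRA paper
    bnb_4bit_compute_dtype=torch.float16,  # computation still runs in 16-bit (bfloat16 on newer GPUs)
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # gated model; requires an accepted license on the Hub
    quantization_config=bnb_config,
    device_map="auto",
)
```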

Hands-On Implementation

Get hands-on experience implementing QLoRA with the Hugging Face Transformers, PEFT, and bitsandbytes libraries. The notebook covers model selection, training, saving, and sharing the adapter on the Hugging Face Hub, along with instructions for loading the model and running text generation; a condensed sketch of that workflow follows.
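The sketch below condenses the workflow, reusing the 4-bit loading and LoRA ideas from the previous snippets. The dataset, Hub repository id, and training hyperparameters are placeholders chosen for illustration, not the notebook's exact values:

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama-2 has no pad token by default

# Load the base model in 4-bit and attach a LoRA adapter on top of it.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)  # cast norms/embeddings for stable 4-bit training
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         bias="none", task_type="CAUSAL_LM"))

# Placeholder instruction dataset with a "text" column.
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-qlora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-4,
                           max_steps=100, fp16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the small adapter is saved and shared; the base model is untouched.
model.push_to_hub("your-username/llama2-7b-qlora-adapter")  # placeholder repo id

# Later: reload the 4-bit base model, attach the adapter, and generate text.
base = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config,
                                            device_map="auto")
tuned = PeftModel.from_pretrained(base, "your-username/llama2-7b-qlora-adapter")
inputs = tokenizer("### Instruction: Summarize what QLoRA does.\n### Response:",
                   return_tensors="pt").to(tuned.device)
print(tokenizer.decode(tuned.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```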

Explore Further

For a deeper dive into the underlying techniques and the full technical background, read our Medium blog.

Access the Colab Notebook

Colab walkthrough - Open In Colab

Learn More in Our Blog

For a detailed understanding of PEFT, LoRA, and QLoRA, check out our blog post. It explains our approach in a clear and thorough manner.

Read the Blog Post