Low-rank and sparse fine-tuning for foundation models

harsh306/loramaster

Efficient Fine-Tuning Methods for Large Language Models

This repository implements several efficient fine-tuning methods for large language models (minimal sketches of the core update rules follow the list), including:

  • LoRA (Low-Rank Adaptation): fine-tunes a model by training a low-rank decomposition of each weight update while the pre-trained weights stay frozen.
  • SoRA (Sparse LoRA): a LoRA variant that learns a sparsity-inducing gate over the rank dimension, pruning unneeded rank components during training.
  • VeRA (Vector-based Random Matrix Adaptation): shares a pair of frozen random low-rank matrices across layers and trains only small per-layer scaling vectors.
  • AdaLoRA (Adaptive LoRA): adaptively allocates the rank budget across weight matrices according to their importance during fine-tuning.
  • LoRA-FA (LoRA with Frozen A): a LoRA variant that freezes the projection-down matrix A and trains only B, reducing optimizer and activation memory.
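
As a concrete reference, here is a minimal sketch of the basic LoRA update, assuming a PyTorch implementation. The class name `LoRALinear` and the default hyperparameters are illustrative, not this repository's API:

```python
import math

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # A is initialized randomly, B to zero, so the initial update is zero.
        self.lora_A = nn.Parameter(torch.empty(rank, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / rank  # common LoRA scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

LoRA-FA follows from the same construction: freeze `lora_A` after initialization (`self.lora_A.requires_grad = False`) so that only `B` is trained. VeRA instead keeps both matrices frozen and random, and trains only two small scaling vectors per layer; a sketch under the same assumptions (`VeRALinear` is likewise a hypothetical name):

```python
class VeRALinear(nn.Module):
    """Frozen random matrices A and B (shareable across layers); only the
    scaling vectors d and b are trained."""

    def __init__(self, base: nn.Linear, A: torch.Tensor, B: torch.Tensor):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.register_buffer("A", A)  # (rank, in_features), frozen
        self.register_buffer("B", B)  # (out_features, rank), frozen
        self.d = nn.Parameter(torch.ones(A.shape[0]))   # per-rank scale
        self.b = nn.Parameter(torch.zeros(B.shape[0]))  # per-output scale; zero init keeps the initial update zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + Lambda_b B Lambda_d A x
        h = (x @ self.A.T) * self.d
        return self.base(x) + (h @ self.B.T) * self.b
```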

Todo

  • Add more efficient fine-tuning methods.
  • Merge the trained adapter weights into the backbone network or individual layers (see the sketch below).
  • Add a learning rate scheduler.
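
For the merge item, the usual approach is to fold the trained low-rank product back into the frozen weight once fine-tuning ends, so inference adds no extra cost. A sketch against the hypothetical `LoRALinear` above:

```python
@torch.no_grad()
def merge_lora(layer: LoRALinear) -> nn.Linear:
    """Fold the low-rank update into the base weight: W' = W + (alpha / r) * B A."""
    merged = nn.Linear(
        layer.base.in_features,
        layer.base.out_features,
        bias=layer.base.bias is not None,
    )
    merged.weight.copy_(layer.base.weight + layer.scaling * (layer.lora_B @ layer.lora_A))
    if layer.base.bias is not None:
        merged.bias.copy_(layer.base.bias)
    return merged
```

After merging, the adapter parameters can be discarded and the model served as a plain stack of `nn.Linear` layers.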
