🚀 Feature
Add support for Weight-Decomposed Low-Rank Adaptation (DoRA).
"DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters. "
Paper: https://arxiv.org/abs/2402.09353
Implementation in Peft: https://github.com/huggingface/peft/releases/tag/v0.9.0
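To make the decomposition concrete, here is a minimal NumPy sketch of the idea described in the paper: the pre-trained weight is split into a per-column magnitude vector and a direction matrix, and a LoRA-style low-rank update is applied only to the direction before renormalizing. All names and shapes below are illustrative, not taken from the Peft implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2  # toy dimensions; rank r << min(d_out, d_in)

# Pre-trained weight (random stand-in for a real checkpoint)
W0 = rng.standard_normal((d_out, d_in))

# Magnitude component: per-column L2 norm of W0 (trainable in DoRA)
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, d_in)

# LoRA factors updating the direction component (trainable)
B = np.zeros((d_out, r))           # B starts at zero, as in LoRA
A = rng.standard_normal((r, d_in))

# Directional update, then column-wise renormalization
V = W0 + B @ A
V_unit = V / np.linalg.norm(V, axis=0, keepdims=True)

# Merged DoRA weight: learned magnitude times unit direction
W = m * V_unit

# With B = 0 the update vanishes, so W reconstructs W0 exactly
assert np.allclose(W, W0)
```

Only `m`, `B`, and `A` would be trained, so the trainable-parameter count stays close to plain LoRA (it adds just one extra magnitude scalar per column).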
Motivation
The authors claim that DoRA matches LoRA's parameter efficiency while achieving better results across all of their reported benchmarks.