To clarify my previous statement: I meant the model weights saved after fine-tuning. In previous projects I've worked on, only the LoRA weights were stored after fine-tuning. In this project, however, all weight parameters are stored after fine-tuning, and I would like to know whether this is an issue with my fine-tuning setup or whether it was intended to work this way.
Storing all weight parameters after fine-tuning is intentional in this project, and it differs from projects that save only the LoRA adapter weights. Saving the complete model state means the fine-tuned model can be loaded and used directly, without first attaching adapters to a separate base model, at the cost of larger checkpoint files.
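For anyone comparing the two behaviors, here is a minimal sketch of the difference using Hugging Face PEFT. Whether this project actually builds on PEFT is an assumption on my part, and the model name, output paths, and target modules below are placeholders:

```python
# Minimal sketch (assumption: the project uses Hugging Face PEFT).
# "base-model", the output paths, and target_modules are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("base-model")
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora_cfg)

# ... fine-tune `model` here ...

# Option 1: save only the LoRA adapter weights (small files:
# adapter_config.json plus the adapter tensors).
model.save_pretrained("out/adapter-only")

# Option 2: merge the adapters into the base weights and save the
# full model, which yields a complete set of parameters on disk.
merged = model.merge_and_unload()
merged.save_pretrained("out/full-model")
```

Projects that store just the LoRA weights correspond to Option 1; a full-parameter checkpoint like this project produces corresponds to Option 2.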
Hope this helps!
We use LoRA; is the output the whole model?