Issues: huggingface/peft
#1826 [BugReport] init_lora_weights with pissa is not compatible with deepspeed stage3 (opened Jun 5, 2024 by wsp317)
#1825 Can't use a LoftQ config in LoraConfig with torch 2.0.0 (opened Jun 5, 2024 by LiManyuan663)
#1801 AdaLora: rank remains constant (at the init_r value) across training (opened May 24, 2024 by geoffvdr)
#1750 How to finetune embeddings and LM head as a single layer when they are tied? (opened May 21, 2024 by GokulNC)
#1748 cannot import name 'get_peft_config' from 'peft' (unknown location) (opened May 20, 2024 by jiyuwangbupt)
#1728 How are LoRA weights A and B initialized? (opened May 13, 2024 by sanaullah-06)
#1721 TypeError: unsupported operand type(s) for *: 'Parameter' and 'NoneType' (opened May 9, 2024 by misonsky)
#1720 RuntimeError: only Tensors of floating point dtype can require gradients, for QLoRA since transformers 4.40 (opened May 9, 2024 by dipanjanS)
#1715 eval_loss shows NaN, and train_loss decreases then goes to NaN after a couple of steps, while fine-tuning a Gemma model with additional vocab (opened May 7, 2024 by sidtandon2014)
#1710 PeftModel fails to load after finetuning: size mismatch error (opened May 4, 2024 by sunxiaojie99)