Hi Abhishek, please help with this.
I am running the command below after removing quantization, as you mentioned before:
autotrain llm --train --project-name llm101 --model abhishek/llama-2-7b-hf-small-shards --data-path . --use-peft --quantization None --lr 2e-4 --train-batch-size 12 --epochs 3 --trainer sft
Below is the error:
warnings.warn(
C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\torch\utils\checkpoint.py:90: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
ERROR | 2024-02-17 15:26:35 | autotrain.trainers.common:wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\autotrain\trainers\common.py", line 88, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\autotrain\trainers\clm_main_.py", line 475, in train
trainer.train()
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\trl\trainer\sft_trainer.py", line 331, in train
output = super().train(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\transformers\trainer.py", line 1539, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\transformers\trainer.py", line 1869, in inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\transformers\trainer.py", line 2777, in training_step
self.accelerator.backward(loss)
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\accelerate\accelerator.py", line 1966, in backward
loss.backward(**kwargs)
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\torch\tensor.py", line 522, in backward
torch.autograd.backward(
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\torch\autograd_init.py", line 266, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
ERROR | 2024-02-17 15:26:35 | autotrain.trainers.common:wrapper:92 - element 0 of tensors does not require grad and does not have a grad_fn
0%|
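For context: this pairing of the checkpoint.py UserWarning with "element 0 of tensors does not require grad" typically appears when gradient checkpointing is active but every tensor feeding the checkpointed blocks is frozen, so the loss ends up with no grad_fn to backpropagate through. With PEFT, the base model's weights are frozen and only the adapter weights train, so the inputs to the checkpointed layers must be made to require grad explicitly. Below is a minimal sketch of that workaround using plain transformers + peft (it is an illustration of the mechanism, not AutoTrain's internal code path; the LoRA hyperparameters shown are placeholder values):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("abhishek/llama-2-7b-hf-small-shards")

# Gradient checkpointing discards activations and recomputes them in backward.
# If none of the checkpointed inputs require grad, the recomputed graph is
# detached and loss.backward() raises the RuntimeError seen above.
model.gradient_checkpointing_enable()

# Force the embedding outputs to require grad so the backward pass can flow
# through the frozen base layers to the trainable LoRA adapters.
model.enable_input_require_grads()

# Attach LoRA adapters; only these parameters will have requires_grad=True.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,                                   # placeholder rank
    lora_alpha=32,                          # placeholder scaling
    target_modules=["q_proj", "v_proj"],    # placeholder target layers
)
model = get_peft_model(model, lora_config)

Alternatively, disabling gradient checkpointing entirely avoids the issue, at the cost of higher activation memory during training.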