Error when running solution to lab 1 #10

Open
toitimtoe opened this issue Jul 10, 2023 · 0 comments

toitimtoe commented Jul 10, 2023

I ran into an error while running the solution for lab 1:

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1340, in TrainingArguments.__post_init__(self)
   1334     if version.parse(version.parse(torch.__version__).base_version) == version.parse("2.0.0") and self.fp16:
   1335         raise ValueError("--optim adamw_torch_fused with --fp16 requires PyTorch>2.0")
   1337 if (
   1338     self.framework == "pt"
   1339     and is_torch_available()
-> 1340     and (self.device.type != "cuda")
   1341     and (get_xla_device_type(self.device) != "GPU")
   1342     and (self.fp16 or self.fp16_full_eval)
   1343 ):
   1344     raise ValueError(
   1345         "FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation"
   1346         " (`--fp16_full_eval`) can only be used on CUDA devices."
   1347     )
   1349 if (
   1350     self.framework == "pt"
   1351     and is_torch_available()
   (...)
   1356     and (self.bf16 or self.bf16_full_eval)
   1357 ):

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1764, in TrainingArguments.device(self)
   1760 """
   1761 The device used by this process.
   1762 """
   1763 requires_backends(self, ["torch"])
-> 1764 return self._setup_devices

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/utils/generic.py:54, in cached_property.__get__(self, obj, objtype)
     52 cached = getattr(obj, attr, None)
     53 if cached is None:
---> 54     cached = self.fget(obj)
     55     setattr(obj, attr, cached)
     56 return cached

File ~/envs/nlpkernel/lib/python3.10/site-packages/transformers/training_args.py:1672, in TrainingArguments._setup_devices(self)
   1670 if not is_sagemaker_mp_enabled():
   1671     if not is_accelerate_available(min_version="0.20.1"):
-> 1672         raise ImportError(
   1673             "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`"
   1674         )
   1675     AcceleratorState._reset_state(reset_partial_state=True)
   1676 self.distributed_state = None

ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
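For anyone hitting the same thing: per the traceback, the check fires inside `TrainingArguments.__post_init__` when line 1340 evaluates `self.device`, which happens before the `fp16` flags are even consulted, so simply constructing the arguments reproduces it. A minimal sketch (the `output_dir` name is just a placeholder):

from transformers import TrainingArguments

# In an env where accelerate is missing or < 0.20.1, this raises the
# ImportError above before any training code runs.
args = TrainingArguments(output_dir="out")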

The error message suggests changing this line

!pip install torch transformers datasets

into

!pip install torch transformers[torch] datasets

or

!pip install torch transformers accelerate datasets
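After reinstalling, a quick way to confirm the environment now satisfies the `Trainer` requirement (a minimal sketch; the 0.20.1 pin comes straight from the error message, and `packaging` is already a transformers dependency):

import importlib.metadata as md
from packaging import version

# Trainer requires accelerate >= 0.20.1 per the ImportError above
installed = version.parse(md.version("accelerate"))
assert installed >= version.parse("0.20.1"), f"accelerate {installed} is too old"
print(f"accelerate {installed} OK")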