PyTorch two devices error #55

Open
mystsec opened this issue May 8, 2023 · 1 comment

mystsec commented May 8, 2023

When I run:

model = SimpleT5()
model.device = torch.device("cuda")
model.from_pretrained("t5","t5-large")
print(model.predict("summarize: "+text)[0])

I get the error:

Traceback (most recent call last):
  File "/home/user/MyApp/summarize.py", line 69, in <module>
    print(titlecase(model.predict("summarize: "+context)[0]))
  File "/home/user/.local/lib/python3.10/site-packages/simplet5/simplet5.py", line 464, in predict
    generated_ids = self.model.generate(
  File "/home/user/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 1088, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/generation_utils.py", line 507, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 912, in forward
    inputs_embeds = self.embed_tokens(input_ids)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/home/user/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

How do I ensure that all tensors are on the GPU?
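For reference, the generic PyTorch fix is to put the model weights and the tokenized inputs on the same device before calling generate(). A minimal sketch that bypasses the simplet5 wrapper and calls transformers directly (the t5-large name and the text variable come from the snippet above; everything else is illustrative, not simpleT5's own API):

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = T5Tokenizer.from_pretrained("t5-large")
t5 = T5ForConditionalGeneration.from_pretrained("t5-large").to(device)  # weights on the GPU

# tokenize, then move input_ids / attention_mask to the same device as the weights
inputs = tokenizer("summarize: " + text, return_tensors="pt").to(device)
generated_ids = t5.generate(**inputs, max_length=64)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))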

@Vinitrajputt

You can use this:

# to load the model
from simplet5 import SimpleT5
model = SimpleT5()
model.from_pretrained(model_type="t5", model_name="google/flan-t5-base")

# to start training it on the GPU
model.train(train_df=train_df[:1000],
            eval_df=test_df[:100],
            source_max_token_len=128,
            target_max_token_len=64,
            batch_size=2,
            max_epochs=5,
            use_gpu=True)

Hope this helps!
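For the inference case in the original question, it should also work to move the wrapped Hugging Face model onto the GPU in addition to setting model.device. A minimal sketch, assuming the wrapper exposes the underlying T5 model as model.model (which the traceback above suggests) rather than relying on documented simpleT5 API:

import torch
from simplet5 import SimpleT5

model = SimpleT5()
model.from_pretrained("t5", "t5-large")

# put the T5 weights on the same device that predict() will use for the inputs
model.device = torch.device("cuda")
model.model = model.model.to(model.device)

print(model.predict("summarize: " + text)[0])  # text as in the snippet above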
