
Torch not compiled with CUDA enabled #7

Open
prashantvidja opened this issue Dec 21, 2022 · 3 comments

@prashantvidja
Hi,

I ran spaCy with the "extend" pipe but got the error below.

(extend) ubuntu@ip-172-31-3-241:~/extend$ python spacy_test.py 
Some weights of the model checkpoint at allenai/longformer-large-4096 were not used when initializing LongformerForQuestionAnswering: ['lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.decoder.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing LongformerForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LongformerForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForQuestionAnswering were not initialized from the model checkpoint at allenai/longformer-large-4096 and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
  File "spacy_test.py", line 13, in <module>
    nlp.add_pipe("extend", after="ner", config=extend_config)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/spacy/language.py", line 792, in add_pipe
    pipe_component = self.create_pipe(
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/spacy/language.py", line 674, in create_pipe
    resolved = registry.resolve(cfg, validate=validate)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/thinc/config.py", line 746, in resolve
    resolved, _ = cls._make(
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/thinc/config.py", line 795, in _make
    filled, _, resolved = cls._fill(
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/thinc/config.py", line 867, in _fill
    getter_result = getter(*args, **kwargs)
  File "/home/ubuntu/extend/extend/spacy_component.py", line 86, in __init__
    self.model = load_checkpoint(checkpoint_path, device)
  File "/home/ubuntu/extend/extend/spacy_component.py", line 24, in load_checkpoint
    model.to(torch.device(device))
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 111, in to
    return super().to(*args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 852, in to
    return self._apply(convert)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 552, in _apply
    param_applied = fn(param)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/nn/modules/module.py", line 850, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I don't think CUDA can be installed on a machine without a GPU.
Can we solve this without a GPU?
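
For reference, a standard PyTorch check (not specific to extend) shows whether the installed torch build can see a GPU:

import torch

print(torch.cuda.is_available())  # False on CPU-only builds or GPU-less machines
print(torch.version.cuda)         # None for CPU-only wheels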

Thanks

@edobobo
Collaborator

edobobo commented Dec 21, 2022

Hi, you should set the device to -1 in the extend_config.
Like this:

extend_config = dict(
    checkpoint_path="<ckpt-path>",
    mentions_inventory_path="<inventory-path>",
    device=-1,
    tokens_per_batch=4000,
)
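
For completeness, a minimal sketch of the spacy_test.py flow with this config (the add_pipe call is copied from the traceback above; the base pipeline name and the spacy_component import are assumptions):

import spacy
from extend import spacy_component  # assumed import that registers the "extend" factory

extend_config = dict(
    checkpoint_path="<ckpt-path>",                # placeholder, as above
    mentions_inventory_path="<inventory-path>",   # placeholder, as above
    device=-1,          # -1 runs the model on CPU, avoiding the CUDA assertion
    tokens_per_batch=4000,
)

nlp = spacy.load("en_core_web_sm")  # assumed base pipeline providing an "ner" component
nlp.add_pipe("extend", after="ner", config=extend_config)  # from the traceback above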

@prashantvidja
Author

Hi, thanks, it worked.

Could you check the output below? It looks like it's showing a wrong result.

Some weights of the model checkpoint at allenai/longformer-large-4096 were not used when initializing LongformerForQuestionAnswering: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.bias']
- This IS expected if you are initializing LongformerForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LongformerForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForQuestionAnswering were not initialized from the model checkpoint at allenai/longformer-large-4096 and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/ubuntu/miniconda3/envs/extend/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py:120: UserWarning: torch.cuda.amp.autocast only affects CUDA ops, but CUDA is not available.  Disabling.
  warnings.warn("torch.cuda.amp.autocast only affects CUDA ops, but CUDA is not available.  Disabling.")
2022-12-21 18:06:36.064 INFO    classy.data.dataset.base: Dataset finished: 2 number of elements processed
[('Japan', 'Japan'), ('2-1', None), ('Syria', 'Syria national football team'), ('Friday', None)]
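
Each pair above appears to be (mention, disambiguated Wikipedia title), with None when nothing was linked; as a hypothetical post-processing step, the unlinked mentions can be filtered out:

results = [('Japan', 'Japan'), ('2-1', None), ('Syria', 'Syria national football team'), ('Friday', None)]
unlinked = [mention for mention, title in results if title is None]
print(unlinked)  # ['2-1', 'Friday']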

@prashantvidja
Author

Hi @edobobo,

Could you please tell us how we can update the wiki data? In our testing, we found that it may not return results for newer terms.

Thanks
