
no module named 'torch._C' #42

Closed
ChloeJKim opened this issue Sep 16, 2020 · 19 comments
@ChloeJKim
Hi
Thank you for your amazing work and for publishing the code!

While replicating your work on making predictions on the existing dataset I encountered the following error: can you please help me out?

```
allennlp predict ./scripts/pretrained/genia-lightweight.tar.gz \ ./scripts/processed_data/json-coref-ident-only/test.json \ --predictor dygie \ --include-package dygie \ --use-dataset-reader \ --output-file predictions/genia-test.jsonl \ --cuda-device 0
```

[screenshot]

Thank you!

@DerekChia

Seems like you do not have torch installed. Do you have all the dependencies installed? https://github.com/dwadden/dygiepp/tree/allennlp-v1#dependencies

@dwadden
Owner

dwadden commented Sep 16, 2020

Thanks @DerekChia! @ChloeJKim, I agree this looks like a PyTorch issue unfortunately. If you're totally stuck let me know and I can try to help debug. Closing for now.

@dwadden dwadden closed this as completed Sep 16, 2020
@ChloeJKim
Author

ChloeJKim commented Sep 16, 2020

I do have torch installed:

[screenshot]

but I still get the same error, and I'm really stuck. Can you help me debug? I really want to replicate your work :)

Thanks!

@DerekChia

Seems like this is really an issue with PyTorch. I did a quick search and found this (pytorch/pytorch#574). Perhaps you can uninstall and reinstall torch?
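Before reinstalling, it may be worth confirming what Python can actually see. This is a minimal sketch (the `check_module` helper is my own, not part of dygiepp): it reports whether `torch` and its compiled extension `torch._C` are importable, which is exactly what the original error message complains about.

```python
import importlib.util

def check_module(name):
    """Return True if `name` can be located on the import path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ImportError:
        # find_spec("torch._C") imports the parent package first, so a
        # broken torch install surfaces here as an ImportError.
        return False

if __name__ == "__main__":
    for mod in ("torch", "torch._C"):
        print(mod, "found" if check_module(mod) else "missing")
```

If `torch` is found but `torch._C` is missing, the compiled extension didn't install correctly, and a clean reinstall of PyTorch is the usual fix.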

@dwadden dwadden reopened this Sep 17, 2020
@dwadden
Owner

dwadden commented Sep 17, 2020

@ChloeJKim let me know if the fix in that issue resolves things for you.

@ChloeJKim
Author

I tried installing Docker to work around the dependency issue, but now I'm running into Docker issues as well. Can you help?

I was trying to replicate this:

[screenshot]

and got this error:

[screenshot]

@dwadden
Owner

dwadden commented Sep 17, 2020

Docker support was added by a contributor, @GillesJ; unfortunately, I don't have the bandwidth to offer support for it. Let's try to get things working without Docker.

Are you interested in training your own model, or in making predictions on an existing dataset?

Also - can you confirm that you created a clean Conda environment and followed the dependency installation instructions in the README exactly?

@ChloeJKim
Author

Sure. To start fresh, I created a new clean environment following your instructions. The steps I ran:

```
conda create -n new python=3.7
conda activate new
cd dygiepp
pip install -r requirements.txt
```

But during installation, the following errors occurred:

[screenshots]

@ChloeJKim
Author

In response to this question:
Are you interested in training your own model, or in making predictions on an existing dataset?

I first want to make predictions on the existing datasets (what you have for GENIA) to see the results, and possibly predict on my own datasets in the future.

[screenshot]

How long does prediction take on the processed GENIA data? Do you know?

Thank you!

@dwadden
Owner

dwadden commented Sep 17, 2020

I'm looking through the install logs, and it looks like there are some issues getting some of the dependencies installed. A few quick observations:

Unfortunately, I can't offer in-depth support when it comes to dealing with dependencies. If my observations don't help, I think your best bet is to create GitHub issues in the relevant repositories (e.g. python-Levenshtein). Once you get the dependencies installed, I can help with anything in the DyGIE code.

On making predictions: It's pretty quick. Predicting on GENIA takes a couple minutes.

@GillesJ
Contributor

GillesJ commented Sep 17, 2020

@ChloeJKim
Author

ChloeJKim commented Sep 18, 2020

Thank you guys, I reinstalled PyTorch and that fixed the problem.
However, while running prediction (the command below),

```
allennlp predict scripts/pretrained/genia-lightweight.tar.gz \ scripts/processed_data/json-coref-ident-only/test.json \ --predictor dygie \ --include-package dygie \ --use-dataset-reader \ --output-file scripts/predictions/genia-test.jsonl \ --cuda-device 0
```

I encountered this error.
[screenshots]

@ChloeJKim
Author

On top of this, what would the output of prediction on the GENIA data look like?
Can you provide an example?

Thank you!

@dwadden
Owner

dwadden commented Sep 18, 2020

I think you're getting an error because your backslashes aren't followed by newlines: https://superuser.com/questions/794963/in-a-linux-shell-why-does-backslash-newline-not-introduce-whitespace. Either preserve the newlines from the example in the README, or remove the backslashes in your command.
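The effect of a stray `\ ` is easy to reproduce with Python's `shlex`, which follows the same POSIX quoting rules as the shell. This is just an illustration of the parsing behavior, not part of the dygiepp command:

```python
import shlex

# Mid-line, backslash-space escapes the space rather than continuing the
# line, so the next path is glued into a bogus token that starts with a
# literal space -- allennlp then sees ' test.json' as an extra argument.
args = shlex.split(r"allennlp predict model.tar.gz \ test.json")
print(args)  # ['allennlp', 'predict', 'model.tar.gz', ' test.json']
```

A backslash only continues the command when the very next character is a newline, which is why pasting the README example onto a single line breaks it.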

The output will be formatted as described here: https://github.com/dwadden/dygiepp/blob/master/doc/data.md. Predicted fields will have the word `predicted` prepended, for instance `predicted_ner`.
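Once prediction succeeds, a short script can load and inspect the output. This is a sketch under the format described in doc/data.md (one JSON document per line, with per-sentence lists in fields like `predicted_ner`); the helper names are my own:

```python
import json

def load_jsonl(path):
    """Read a DyGIE-format file: one JSON document per line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def flatten_predicted_ner(doc):
    """Collect predicted entity spans from all sentences of one document."""
    return [span for sentence in doc.get("predicted_ner", []) for span in sentence]
```

For example, `flatten_predicted_ner(load_jsonl("scripts/predictions/genia-test.jsonl")[0])` would list every predicted entity span in the first document.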

@ChloeJKim
Author

Thank you @dwadden

After getting rid of the backslashes, I got the following attribute error:
```
(dygiepp) kimc26@ng008:~/dygiepp % allennlp predict scripts/pretrained/genia-lightweight.tar.gz scripts/processed_data/json-coref-ident-only/test.json --predictor dygie --include-package dygie --use-dataset-reader --output-file scripts/predictions/genia-test.jsonl --cuda-device 0

2020-09-18 11:24:40,258 - INFO - pytorch_pretrained_bert.modeling - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
2020-09-18 11:24:48,737 - INFO - pytorch_transformers.modeling_bert - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
2020-09-18 11:24:48,781 - INFO - pytorch_transformers.modeling_xlnet - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
2020-09-18 11:24:51,058 - INFO - allennlp.common.registrable - instantiating registered subclass relu of <class 'allennlp.nn.activations.Activation'>
2020-09-18 11:24:51,060 - INFO - allennlp.common.registrable - instantiating registered subclass relu of <class 'allennlp.nn.activations.Activation'>
2020-09-18 11:24:51,063 - INFO - allennlp.common.registrable - instantiating registered subclass relu of <class 'allennlp.nn.activations.Activation'>
2020-09-18 11:24:51,065 - INFO - allennlp.common.registrable - instantiating registered subclass relu of <class 'allennlp.nn.activations.Activation'>
2020-09-18 11:24:52,335 - INFO - allennlp.models.archival - loading archive file scripts/pretrained/genia-lightweight.tar.gz
2020-09-18 11:24:52,335 - INFO - allennlp.models.archival - extracting archive file scripts/pretrained/genia-lightweight.tar.gz to temp dir /local/27708790/tmprpnoy3ay
2020-09-18 11:24:57,153 - INFO - allennlp.common.registrable - instantiating registered subclass dygie of <class 'allennlp.models.model.Model'>
2020-09-18 11:24:57,153 - INFO - allennlp.common.params - type = default
2020-09-18 11:24:57,153 - INFO - allennlp.common.registrable - instantiating registered subclass default of <class 'allennlp.data.vocabulary.Vocabulary'>
2020-09-18 11:24:57,153 - INFO - allennlp.data.vocabulary - Loading token dictionary from /local/27708790/tmprpnoy3ay/vocabulary.
[... INFO lines dumping the full model and module configs elided for length ...]
2020-09-18 11:24:57,156 - INFO - allennlp.common.params - model.text_field_embedder.type = basic
2020-09-18 11:24:57,156 - INFO - allennlp.common.params - model.text_field_embedder.allow_unmatched_keys = True
2020-09-18 11:24:57,156 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders.bert.type = bert-pretrained
2020-09-18 11:24:57,157 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders.bert.pretrained_model = pretrained/scibert_scivocab_cased/weights.tar.gz
2020-09-18 11:24:57,157 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders.bert.requires_grad = True
2020-09-18 11:24:57,157 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders.bert.top_layer_only = False
2020-09-18 11:24:57,157 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders.bert.scalar_mix_parameters = None
2020-09-18 11:24:57,157 - ERROR - pytorch_pretrained_bert.modeling - Model name 'pretrained/scibert_scivocab_cased/weights.tar.gz' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'pretrained/scibert_scivocab_cased/weights.tar.gz' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
  File "/gstore/home/kimc26/.conda/envs/dygiepp/bin/allennlp", line 8, in <module>
    sys.exit(run())
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/run.py", line 18, in run
    main(prog="allennlp")
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 102, in main
    args.func(args)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/commands/predict.py", line 214, in _predict
    predictor = _get_predictor(args)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/commands/predict.py", line 120, in _get_predictor
    overrides=args.overrides)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py", line 230, in load_archive
    cuda_device=cuda_device)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py", line 327, in load
    return cls.by_name(model_type)._load(config, serialization_dir, weights_file, cuda_device)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py", line 265, in _load
    model = Model.from_params(vocab=vocab, params=model_params)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 365, in from_params
    return subclass.from_params(params=params, **extras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 386, in from_params
    kwargs = create_kwargs(cls, params, **extras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 133, in create_kwargs
    kwargs[name] = construct_arg(cls, name, annotation, param.default, params, **extras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 229, in construct_arg
    return annotation.from_params(params=subparams, **subextras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 365, in from_params
    return subclass.from_params(params=params, **extras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 160, in from_params
    for name, subparams in token_embedder_params.items()
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 160, in <dictcomp>
    for name, subparams in token_embedder_params.items()
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 365, in from_params
    return subclass.from_params(params=params, **extras)
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py", line 388, in from_params
    return cls(**kwargs)  # type: ignore
  File "/gstore/home/kimc26/.conda/envs/dygiepp/lib/python3.7/site-packages/allennlp/modules/token_embedders/bert_token_embedder.py", line 272, in __init__
    for param in model.parameters():
AttributeError: 'NoneType' object has no attribute 'parameters'
2020-09-18 11:24:57,248 - INFO - allennlp.models.archival - removing temporary unarchived model dir at /local/27708790/tmprpnoy3ay
```

Any ideas on this?

@ChloeJKim
Author

ChloeJKim commented Sep 18, 2020 via email

@dwadden
Owner

dwadden commented Sep 19, 2020

It looks like you don't have SciBERT downloaded and placed in the correct folder. Please follow the instructions in the README to download SciBERT.

GENIA only has named entities, not relations. I don't remember the schema offhand, but once you've processed the data you can examine the named entity labels to determine this.
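To see which entity labels actually occur, you can tally them directly from a processed GENIA file. A sketch, assuming the data.md format where each NER annotation is a `[start, end, label]` triple grouped by sentence (the function name is my own):

```python
import json
from collections import Counter

def ner_label_counts(path, field="ner"):
    """Count entity labels in a DyGIE-format .jsonl file."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            doc = json.loads(line)
            # Each sentence holds a list of [start, end, label] spans.
            for sentence in doc.get(field, []):
                for _start, _end, label in sentence:
                    counts[label] += 1
    return counts
```

Pass `field="predicted_ner"` to tally a prediction file instead of gold annotations.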

@dwadden
Owner

dwadden commented Sep 21, 2020

I just finalized a fairly substantial code upgrade. You no longer have to download SciBERT to get things working. The new docs should provide enough info to get predictions working.

@dwadden
Owner

dwadden commented Oct 2, 2020

Closing this for lack of activity. Feel free to reopen if you're still having problems.

@dwadden dwadden closed this as completed Oct 2, 2020