Issues with score.sh on streaming transformer(mma) models #252

Open

cjw414 opened this issue Jan 13, 2021 · 0 comments

cjw414 commented Jan 13, 2021

Hello Hiro, after I trained a streaming Transformer (MMA) model on the Librispeech corpus, I tried to decode with score.sh.
It seemed to go well at first, but decoding failed after a few steps with the following error message:

# This part is just a warning.
  0%|          | 0/2703 [00:00<?, ?it/s]/mnt/data1/jungwonchang/projects/neural_sp/neural_sp/models/modules/mocha.py:815: UserWarning: This overload of nonzero is deprecated:
  nonzero()
Consider using one of the following signatures instead:
  nonzero(*, bool as_tuple) (Triggered internally at  /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
  boundary = alpha[b, h, 0, 0].nonzero()[:, -1].min().item()
  9%|| 232/2703 [15:06<1:49:35,  2.66s/it]Original utterance num: 2703
Removed 0 empty utterances
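
For reference, the deprecation warning only asks for the explicit as_tuple argument; here is a minimal, self-contained sketch of the non-deprecated call (the tensor below is made up for illustration, not the repo's alpha):

```python
import torch

# Sketch of the nonzero() signature the warning suggests (not neural_sp's code):
# as_tuple=False keeps the old (N, ndim) index-matrix behaviour, so the
# existing [:, -1].min() indexing in mocha.py would still work unchanged.
alpha_row = torch.tensor([0.0, 0.3, 0.0, 0.7])  # hypothetical attention weights
boundary = alpha_row.nonzero(as_tuple=False)[:, -1].min().item()
print(boundary)  # 1 -> index of the first nonzero weight
```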

# This part is where I get errors
Traceback (most recent call last):
  File "/home/jungwonchang/projects1/neural_sp/examples/librispeech/s5/../../../neural_sp/bin/asr/eval.py", line 247, in <module>
    main()
  File "/home/jungwonchang/projects1/neural_sp/examples/librispeech/s5/../../../neural_sp/bin/asr/eval.py", line 182, in main
    oracle=True)
  File "/mnt/data1/jungwonchang/projects/neural_sp/neural_sp/evaluators/wordpiece.py", line 85, in eval_wordpiece
    ensemble_models=models[1:] if len(models) > 1 else [])[0]
  File "/mnt/data1/jungwonchang/projects/neural_sp/neural_sp/models/seq2seq/speech2text.py", line 763, in decode
    ensmbl_eouts, ensmbl_elens, ensmbl_decs)
  File "/mnt/data1/jungwonchang/projects/neural_sp/neural_sp/models/seq2seq/decoders/transformer.py", line 896, in beam_search
    rightmost_frame = max(0, aws_last_success[0, :, 0].nonzero()[:, -1].max().item()) + 1
RuntimeError: operation does not have an identity.
  9%|| 232/2703 [15:07<2:41:09,  3.91s/it]
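
My reading of the failing line is that it reduces an empty tensor: when aws_last_success[0, :, 0] has no nonzero entries (i.e. no frame was attended), nonzero() returns an empty index matrix and .max() has nothing to reduce. A minimal repro of that failure mode, with a made-up tensor name and shape rather than the repo's actual values:

```python
import torch

# Hypothetical repro: all-zero attention -> nonzero() is empty -> .max() fails.
aws_last_success = torch.zeros(1, 4, 1, 10)    # made-up shape, no attended frame
idx = aws_last_success[0, :, 0].nonzero()      # shape (0, 2): empty index matrix
print(idx.numel())                             # 0
rightmost_frame = max(0, idx[:, -1].max().item()) + 1
# -> RuntimeError: operation does not have an identity. (on this PyTorch build;
#    newer versions word the empty-reduction error differently)
```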

The configuration I used was:
conf/asr/mma/streaming/lc_transformer_mma_subsample8_ma4H_ca4H_w16_from4L_64_128_64.yaml

Also, I found in decode.log that the streamable flag for my model was False:

2021-01-11 14:29:08,749 neural_sp.models.seq2seq.decoders.transformer line:888 INFO: streamable: False
2021-01-11 14:29:08,749 neural_sp.models.seq2seq.decoders.transformer line:889 INFO: streaming failed point: 1

Any ideas or advice on this issue?
