When I train with the S4 decoder on LibriSpeech (asr1 recipe), the training loss looks fine.
However, when I run inference with the S4 decoder, the WER is very poor: the beam-search CER is much larger than both the training CER and the CTC CER, which is strange.
Concretely, training the S4 decoder on Librispeech_clean_100 (asr1) gives a valid CER of 0.076 and a CTC CER of 0.086.
But at inference, the beam-search CER on dev_clean is 16.7%, far worse than during training.
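For readers comparing the numbers above: CER is the character-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric (not ESPnet's implementation, which also aggregates edits over the whole test set):

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = dp[0]
        dp[0] = i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            # deletion, insertion, or (mis)match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
            prev = cur
    return dp[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edits normalized by reference length."""
    return edit_distance(ref, hyp) / len(ref)

print(cer("abcd", "abed"))  # one substitution out of 4 chars -> 0.25
```

A valid CER of 0.076 vs. a beam-search CER of 16.7% on the same metric is roughly a 2x degradation, which is why the gap looks suspicious.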
Hi, thanks for your report.
I confirmed that I was able to run the S4 decoder training and inference with the latest commit successfully.
Could you share your training and inference configurations?
Also, could you check the Transformer decoder inference as well?
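For reference, an espnet2-style decode config usually looks something like the sketch below. The field names and values here are illustrative assumptions about a typical setup, not the reporter's actual settings:

```yaml
# conf/decode_asr.yaml (illustrative, not the reporter's config)
beam_size: 10
ctc_weight: 0.3      # weight of CTC joint scoring during beam search
lm_weight: 0.0       # > 0 only if an external LM is used
penalty: 0.0
maxlenratio: 0.0
minlenratio: 0.0
```

Sharing the actual training YAML and decode YAML would make it possible to check whether, e.g., the CTC weight or beam settings differ from the tested configuration.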