
How to run inference with the S4 decoder? #5700

Open · Haoxiang-Hou opened this issue Mar 13, 2024 · 1 comment
Labels: Question

Comments

@Haoxiang-Hou

When I train with the S4 decoder on the LibriSpeech asr1 recipe, the training loss looks very good.
However, when I run inference with the S4 decoder, the WER is very bad, and the inference beam-search CER is much higher than the training CER and the CTC CER. It is strange.
When I train with the S4 decoder on Librispeech_clean_100 (asr1), the valid CER is 0.076 and the CTC CER is 0.086.
When I run inference with the S4 decoder, the beam-search CER on dev_clean is 16.7%, even worse than during training.

Haoxiang-Hou added the Question label Mar 13, 2024
@m-koichi
Contributor

Hi, thanks for your report.
I confirmed that I was able to run S4 decoder training and inference successfully with the latest commit.
Could you share your training and inference configurations?
Also, could you check the Transformer decoder inference as well?
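For reference, the Transformer-decoder baseline check suggested above usually amounts to changing only the decoder block of the training config and re-decoding with the same inference settings. A minimal sketch, assuming the ESPnet2-style YAML layout; the keys and values below are illustrative, not the exact recipe defaults:

```yaml
# train_asr.yaml (excerpt) -- illustrative, adapt to your recipe
# Baseline check: swap the S4 decoder for the Transformer decoder,
# retrain, then decode with identical inference settings.
decoder: transformer        # was: s4
decoder_conf:
    num_blocks: 6
    attention_heads: 4
    linear_units: 2048

# decode_asr.yaml (excerpt) -- illustrative
beam_size: 10
ctc_weight: 0.3   # if beam-search CER is poor, a higher value leans more on CTC scores
```

If the Transformer decoder decodes cleanly with the same inference config, that would point at the S4 decoder path rather than the decoding setup.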
