
How to train/decode on reverberant speech? #251

Open
kevinmchu opened this issue Jan 8, 2021 · 1 comment
@kevinmchu

I'd like to train a model on reverberant speech using alignments generated from the corresponding anechoic data. Currently, I'm doing something similar to TIMIT_joint_training_liGRU_fbank.cfg: I use the reverberant TIMIT recipe to extract the features and the anechoic recipe for lab_folder and lab_graph. However, I noticed that decode_dnn.sh uses lab_graph to generate the lattices, rather than a graph built from the reverberant acoustic model.
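
Concretely, the relevant slices of my config look roughly like the sketch below. All paths are placeholders for my setup; the keys (fea_lst, lab_folder, lab_graph, etc.) follow the TIMIT recipe configs:

```
# Training chunk: reverberant features, anechoic alignments (illustrative paths).
[dataset1]
data_name = TIMIT_rev_tr
fea = fea_name=fbank
	fea_lst=timit_rev/data/train/feats_fbank.scp
	cw_left=0
	cw_right=0
lab = lab_name=lab_cd
	lab_folder=timit_anechoic/exp/tri3_ali
	lab_opts=ali-to-pdf
	lab_count_file=auto
	lab_data_folder=timit_rev/data/train/
	lab_graph=timit_anechoic/exp/tri3/graph

# Test chunk: decode_dnn.sh takes its decoding graph from lab_graph here,
# which is why decoding currently runs against the anechoic graph.
[dataset3]
data_name = TIMIT_rev_test
lab = lab_name=lab_cd
	lab_folder=timit_anechoic/exp/tri3_ali_test
	lab_graph=timit_anechoic/exp/tri3/graph
```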

What is the easiest way to specify using the anechoic alignments and reverberant graph?
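
One route I considered (I'm not sure it's the intended one): build a graph from the reverberant acoustic model with Kaldi's utils/mkgraph.sh and point lab_graph of the test-set section at it, while keeping lab_folder on the anechoic alignments. Paths here are again placeholders:

```sh
# Hypothetical: build the decoding graph from the reverberant tri3 model
# (data/lang_test_bg is the standard TIMIT language dir), then set
# lab_graph=timit_rev/exp/tri3/graph in the test-set section of the cfg.
utils/mkgraph.sh data/lang_test_bg timit_rev/exp/tri3 timit_rev/exp/tri3/graph
```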

@kevinmchu
Author

I just wanted to follow up and ask if anyone has suggestions.
