
Support required for fine tuning cache aware streaming model #9027

Open
rkchamp25 opened this issue Apr 24, 2024 · 0 comments
rkchamp25 commented Apr 24, 2024

Hi,
I want to fine-tune the "stt_en_fastconformer_hybrid_large_streaming_multi" model on my custom data, and I would like to know some best practices for fine-tuning a cache-aware streaming model.

  1. I am using audio clips of a fixed length (2 s). Is this a good choice, or can the clips have different lengths? Also, roughly how much total audio is needed to fine-tune on a dataset from a different domain (medical data)?
  2. Which tokenizer should I use? Should I fine-tune with a custom tokenizer built from the new data, or keep the default tokenizer and fine-tune on the new audio only?
  3. How can I make this model work with a different language? Can I fine-tune it directly on audio in another language, e.g. Spanish, or is there another recommended way to use it for a different language?
  4. How do I resume training, since I cannot train in one go? Is this possible when fine-tuning with NeMo/examples/asr/speech_to_text_finetune.py?
  5. Should I use speech_to_text_finetune.py or speech_to_text_hybrid_rnnt_ctc_bpe.py? I want to try both the old vocabulary and a new one, and I need to stop and resume training multiple times.
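For reference, this is roughly how I am invoking the fine-tuning script at the moment (a sketch, not a working setup: the paths are placeholders, and I have taken the override names from the Hydra config shipped with speech_to_text_finetune.py, so please correct me if any of them are wrong):

```shell
# Sketch of a speech_to_text_finetune.py invocation (placeholder paths).
# resume_if_exists should let a run in the same exp_dir pick up from the
# last checkpoint, which is how I was hoping to handle question 4.
python NeMo/examples/asr/speech_to_text_finetune.py \
    --config-path=conf --config-name=speech_to_text_finetune \
    init_from_pretrained_model="stt_en_fastconformer_hybrid_large_streaming_multi" \
    model.train_ds.manifest_filepath=/path/to/train_manifest.json \
    model.validation_ds.manifest_filepath=/path/to/val_manifest.json \
    model.tokenizer.update_tokenizer=false \
    exp_manager.exp_dir=/path/to/experiments \
    exp_manager.resume_if_exists=true \
    exp_manager.resume_ignore_no_checkpoint=true
```

For question 2, my understanding is that setting `model.tokenizer.update_tokenizer=true` and pointing `model.tokenizer.dir` at a tokenizer built from the new text would switch to a custom vocabulary, but I am not sure whether that is advisable for a streaming model.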