Validation loss becomes higher after 20 hours of training time #88

Open · misbullah opened this issue Jan 3, 2018 · 3 comments

@misbullah

Hi,
Yesterday I tried to run speech.yml from the examples.
In the beginning the validation loss was about 200.xxx, but after 20 hours of training it became higher; now it is about 630.xxx.

Is there any problem with this training process?

I ask because in the graph on the Deepgram blog (http://blog.deepgram.com/how-to-train-baidus-deepspeech-model-with-kur/), the validation loss gets smaller with more iterations.

Thanks.

@scottstephenson
Collaborator

Does looking at #6 help? Seeing your loss plot would help too.

@misbullah
Author

Hi @scottstephenson,

Yes, I already checked it. A simple question: how can I create a loss plot from the Kur training process?
Is there any documentation for it?

I see that Kur uses TensorFlow as a backend. Does it also support warp-ctc, as implemented in the following repository?
https://github.com/mozilla/DeepSpeech

Thanks.

@scottstephenson
Collaborator

Have a look at the tutorial: https://kur.deepgram.com/tutorial.html
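
The tutorial also covers plotting the loss curves from the training log. Roughly, it boils down to something like the sketch below; this assumes your kurfile's `train` section writes a binary log (e.g. `log: speech-log`) and that the column names match the tutorial's, so adjust as needed:

```python
import matplotlib.pyplot as plt
from kur.loggers import BinaryLogger

# 'speech-log' is whatever path is configured under `train: log:` in the kurfile.
training_loss = BinaryLogger.load_column('speech-log', 'training_loss_total')
validation_loss = BinaryLogger.load_column('speech-log', 'validation_loss_total')

plt.xlabel('Epoch')
plt.ylabel('Loss')
t_line, = plt.plot(range(1, len(training_loss) + 1), training_loss,
                   'co-', label='Training Loss')
v_line, = plt.plot(range(1, len(validation_loss) + 1), validation_loss,
                   'mo-', label='Validation Loss')
plt.legend(handles=[t_line, v_line])
plt.savefig('loss.png')
```

matplotlib is the only extra dependency here; you can re-run it as training progresses since the log keeps getting updated.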

The TensorFlow backend uses TensorFlow's CTC implementation; the PyTorch and Theano backends use warp-CTC.
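
So if you want the warp-CTC path, select the PyTorch backend in your kurfile's settings block. A rough sketch (the key names follow the bundled examples, so double-check them against your Kur version):

```yaml
settings:
  backend:
    # pytorch (or keras + theano) -> warp-CTC,
    # keras + tensorflow -> TensorFlow's built-in CTC.
    name: pytorch
```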
