
Support for Calculating Loss and Accuracy on Validation Data? #266

Open
fukudatppei opened this issue Jan 28, 2024 · 2 comments
Labels
good first issue (Good for newcomers)

Comments

@fukudatppei

I'm currently in the process of training a model and have been tracking the loss and accuracy metrics.
However, I've noticed that while I can calculate these metrics for the training data, there isn't a straightforward way to calculate the loss and accuracy for the validation data within the current workflow.

Is there any plan to add support for computing these metrics on validation data in the near future? This feature would be extremely helpful for better evaluating model performance during the development process.

Thank you for considering this request.
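
For reference, a validation pass of the kind requested here might look like the following minimal PyTorch sketch. `model`, `val_loader`, and `criterion` are hypothetical placeholders, not WeSpeaker APIs.

```python
import torch

def evaluate(model, val_loader, criterion, device="cpu"):
    """Return average loss and accuracy over a validation set."""
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():  # no gradients needed for evaluation
        for features, labels in val_loader:
            features, labels = features.to(device), labels.to(device)
            logits = model(features)
            total_loss += criterion(logits, labels).item() * labels.size(0)
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    model.train()
    return total_loss / total, correct / total
```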

@JiJiJiang
Collaborator

Thank you for your question!
When we first started creating and maintaining WeSpeaker, we considered whether to add a validation step during training. We found that the model with the lowest validation loss or lowest EER on the dev set was not necessarily the best model on the test set, while models trained for enough steps generalized well to unseen data. So in WeSpeaker we simply make sure enough epochs are trained and average the checkpoints of the last 10 epochs for robustness.
If you are not sure whether the model has converged, you can check the loss curve or simply train for more epochs.
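
For illustration, averaging the last N checkpoints could be done along these lines. This is a rough sketch that assumes each file stores a flat `state_dict`; the file names are made up, and it is not WeSpeaker's actual averaging script.

```python
import torch

def average_checkpoints(ckpt_paths):
    """Element-wise average of the parameters in several checkpoints."""
    avg_state = None
    for path in ckpt_paths:
        # Assumption: each checkpoint file is a flat state_dict.
        state = torch.load(path, map_location="cpu")
        if avg_state is None:
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg_state:
                avg_state[k] += state[k].float()
    for k in avg_state:
        avg_state[k] /= len(ckpt_paths)
    return avg_state

# Hypothetical layout: average the checkpoints of the last 10 epochs.
paths = [f"exp/model_{e}.pt" for e in range(141, 151)]
torch.save(average_checkpoints(paths), "exp/avg_model.pt")
```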

JiJiJiang added the good first issue label on Jan 29, 2024
@wsstriving
Collaborator

> Is there any plan to add support for computing these metrics on validation data in the near future? This feature would be extremely helpful for better evaluating model performance during the development process.

We will try to support this, or EER-based validation, in the future.
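
As a rough sketch of what EER-based validation would compute, assuming per-trial similarity scores and binary same-speaker labels (both made up here), one common implementation uses scikit-learn's ROC curve:

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """EER: the operating point where false-accept rate == false-reject rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # closest point to FAR == FRR
    return (fpr[idx] + fnr[idx]) / 2.0

# Toy trial list: 1 = same-speaker pair, 0 = different-speaker pair.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.82, 0.74, 0.31, 0.40, 0.15, 0.05])
print(f"EER = {compute_eer(labels, scores):.2%}")
```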
