
Differences between Evaluation Samples: Evaluation and Training mode #61

Open

keith0811 opened this issue Feb 4, 2024 · 1 comment

@keith0811

What is the difference between the Evaluation and Training options under Evaluation Samples?

@kochlisGit
Owner

The Evaluation option "simulates" how well the model would perform in the scenario where you would actually use it. You can then play with the filters and percentiles to see in which matches your model performs well (e.g. it predicts Home Wins with very high accuracy when odd 1 is between 1.30 and 1.60).

On the other hand, use the Training option to check how well your model "fits" (learns) the training dataset. For instance, if the training accuracy is too low, it means the model has not learnt the dataset properly (under-fitting). If the training accuracy is very high but the evaluation accuracy is low, it means the model has learnt the training dataset so well that it fails to predict the outcomes of new, unseen matches that are similar to the training data (over-fitting).

For example, if you train a Neural Network for 100 epochs with 5 hidden layers of 300-500 units each, you might notice 100% accuracy on the training set but only 30% on the evaluation set, which means the model has over-fitted.
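A minimal sketch of that check (illustrative only, not the project's actual training code; the data, layer sizes, and epoch count are placeholders) that compares training vs. evaluation accuracy with Keras to spot over- or under-fitting:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for match features and labels (Home/Draw/Away -> 3 classes).
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(1000, 20)), rng.integers(0, 3, size=1000)
x_eval, y_eval = rng.normal(size=(300, 20)), rng.integers(0, 3, size=300)

# Deliberately large network: 5 hidden layers of 300-500 units, as in the example above.
model = keras.Sequential(
    [layers.Input(shape=(20,))]
    + [layers.Dense(units, activation="relu") for units in (500, 450, 400, 350, 300)]
    + [layers.Dense(3, activation="softmax")]
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=100, batch_size=32, verbose=0)

# A large gap (e.g. ~100% train vs. ~30% eval) indicates over-fitting;
# low accuracy on both sets indicates under-fitting.
_, train_acc = model.evaluate(x_train, y_train, verbose=0)
_, eval_acc = model.evaluate(x_eval, y_eval, verbose=0)
print(f"train accuracy: {train_acc:.2f}  eval accuracy: {eval_acc:.2f}")
```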
