The evaluation option "simulates" how well the model would perform in a scenario where you would actually use it. You can then play with the filters and percentiles to see in which matches your model performs well (e.g. it predicts Home Wins with very high accuracy when odd 1 is between 1.30 and 1.60).
On the other hand, use the training option to check how well your model "fits" (learns) the training dataset. For instance, if the training accuracy is too low, the model has not learnt the training data correctly (under-fitting). If the training accuracy is very high but the evaluation accuracy is low, the model has learnt the training dataset so well that it fails to predict the outcomes of new, unseen matches similar to the training data (over-fitting).
For example, if you train a neural network for 100 epochs with 5 hidden layers of 300-500 units each, you might notice 100% accuracy on the training set but only 30% on the evaluation set, which means the model has overfitted.
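The over-fitting symptom above can be sketched with a toy "model" that simply memorizes its training examples (all odds, labels, and the `predict` fallback below are made-up illustrations, not the tool's actual model):

```python
# Toy illustration of over-fitting: a "model" that memorizes the training set.
# Features are (odd 1, odd X) tuples; labels are "H" (home win) or "A" (away win).
train_set = [((1.40, 3.2), "H"), ((2.10, 3.0), "A"), ((1.55, 3.5), "H")]
eval_set = [((1.45, 3.1), "A"), ((2.05, 2.9), "H"), ((1.60, 3.4), "A")]

memory = dict(train_set)  # memorizes every training example exactly

def predict(features):
    # Perfect recall on inputs it has seen; a blind guess ("H") on unseen ones.
    return memory.get(features, "H")

def accuracy(dataset):
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

print(accuracy(train_set))           # → 1.0 (looks perfect in training)
print(round(accuracy(eval_set), 2))  # → 0.33 (fails on unseen matches)
```

A high gap between the two numbers is exactly the training-vs-evaluation comparison described above: the training option would report the first figure, the evaluation option the second.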
Original question: What is the difference between the evaluation and training options under "evaluation sample"?