Add prediction quality monitoring #1

Open
h-huss opened this issue Aug 16, 2018 · 0 comments
An automatic assessment of prediction quality could be a very helpful tool for further tuning. Ideally, predictions should be run for a number of repositories on each commit, similar to continuous integration. This would help determine whether a particular change is actually useful.

To get meaningful results, these predictions would need to run on a large number of repositories with sufficient computing power. It might be hard to achieve this on Travis.

Furthermore, machine learning is a stochastic process: random fluctuations in result quality might overshadow the effects of changes.
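One way to handle that noise (a minimal sketch, not part of the repository; the function name, the `sigmas` threshold, and the example scores are all hypothetical) is to evaluate each revision over several seeded runs and only flag an improvement when the mean score shift clearly exceeds the run-to-run spread:

```python
# Hypothetical sketch: decide whether a change improved prediction quality
# despite run-to-run noise, by comparing mean scores from several seeded
# evaluation runs against the observed per-revision spread.
import statistics

def is_improvement(baseline_scores, candidate_scores, sigmas=2.0):
    """Return True only if the candidate's mean score exceeds the
    baseline's mean by more than `sigmas` standard deviations of noise."""
    mean_b = statistics.mean(baseline_scores)
    mean_c = statistics.mean(candidate_scores)
    # Use the larger of the two within-revision spreads as the noise estimate.
    noise = max(statistics.stdev(baseline_scores),
                statistics.stdev(candidate_scores))
    return mean_c - mean_b > sigmas * noise

# Example with made-up scores from 5 seeded runs per revision:
baseline = [0.71, 0.69, 0.72, 0.70, 0.68]
candidate = [0.80, 0.82, 0.79, 0.81, 0.83]
print(is_improvement(baseline, candidate))
```

The threshold is deliberately conservative: a change that does not clear the noise band is treated as neutral rather than as a regression or a win.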

An initial evaluation implementation was started in 4ac580e.
