
Enhancements to benchmarking #146

Open
adiwajshing opened this issue Apr 2, 2020 · 5 comments

@adiwajshing

Hi,

I was thinking of adding some features and simplifying the benchmarking tool into a single-line command-line interface.

For example:

  1. `mlpack-benchmark -ann -mlpack -github=#1123123`
  2. `mlpack-benchmark -ann -shogun -github=#522323`
  3. `mlpack-benchmark -ann -mlpack`

The following should be the parameters to this system:

  1. The algorithm (-ann, -kmeans, -cnn)
  2. The ML library (-mlpack, -shogun, -scikit)
  3. GitHub commit (optional)
  4. Local path to uncommitted library (-local=/some/path)
  5. Which data sets to train on (-datasets=wine,iris)

The output should be the time it took to train on each data set, the error rate, and more algorithm-specific output (MSE, average time per epoch, etc.).
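
To make the interface concrete, here is a minimal sketch of what the argument parsing could look like, assuming a Python entry point. The `mlpack-benchmark` name and flag set come from the examples above; I've normalized the single-dash flags to argparse's `--` convention, and the defaults and dataset handling are just placeholders:

```python
# Hypothetical sketch only: none of these flags exist yet.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog='mlpack-benchmark')
    parser.add_argument('--algorithm', required=True,
                        choices=['ann', 'kmeans', 'cnn'],
                        help='algorithm to benchmark')
    parser.add_argument('--library', required=True,
                        choices=['mlpack', 'shogun', 'scikit'],
                        help='library implementation to benchmark')
    parser.add_argument('--github', metavar='COMMIT', default=None,
                        help='library commit to check out (optional)')
    parser.add_argument('--local', metavar='PATH', default=None,
                        help='local path to an uncommitted copy of the library')
    parser.add_argument('--datasets', default='all',
                        help='comma-separated data sets, e.g. wine,iris')
    args = parser.parse_args(argv)
    if args.datasets != 'all':
        # Turn "wine,iris" into ['wine', 'iris'] for the benchmark runner.
        args.datasets = args.datasets.split(',')
    return args

if __name__ == '__main__':
    print(parse_args())
```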

This enhancement should also allow users to specify which commit of the library they want to run the benchmarks on. Alternatively, they could point it at a local copy to benchmark, to easily test uncommitted changes. Moreover, the program would automatically download and build the source of the library if required.
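
For the download-and-build step, something like the following could work. This is a rough sketch only, assuming `git`, CMake, and `make` are on the PATH; the `fetch_and_build` helper, the repository URL handling, and the build recipe are all hypothetical, and each library would need its own build steps:

```python
# Hypothetical sketch only: the CMake/make recipe is a placeholder.
import os
import subprocess

def fetch_and_build(repo_url, commit=None, workdir='cache'):
    os.makedirs(workdir, exist_ok=True)
    src = os.path.join(workdir, 'src')
    # Clone once, then reuse the checkout across runs.
    if not os.path.isdir(src):
        subprocess.run(['git', 'clone', repo_url, src], check=True)
    # Pin to the requested commit if one was given.
    if commit is not None:
        subprocess.run(['git', '-C', src, 'fetch', 'origin'], check=True)
        subprocess.run(['git', '-C', src, 'checkout', commit], check=True)
    # Out-of-source build; this step is library-specific in practice.
    build_dir = os.path.join(src, 'build')
    os.makedirs(build_dir, exist_ok=True)
    subprocess.run(['cmake', '..'], cwd=build_dir, check=True)
    subprocess.run(['make', '-j4'], cwd=build_dir, check=True)
    return build_dir
```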

@rcurtin (Member) commented Apr 2, 2020

This seems like it would be an extremely significant refactoring. I don't have a problem with it (it sounds like a cool idea to me!), I just want to point out that it seems likely to take many weeks of hard work to make that work correctly.

@zoq (Member) commented Apr 2, 2020

Agreed, nice idea. I think some parts are already there, but they have to be put together in the right way.

@adiwajshing (Author)

@rcurtin @zoq I'm willing to put the work in; it should actually be quite exciting! I had trouble benchmarking everything when I was working on this, and so figured a uniform and simple way to benchmark ML algorithms would go a long way in helping everybody who wants to test out new changes, etc.

Also, I had proposed this in my GSoC application; I hope it's okay if I start on it now?

@rcurtin (Member) commented Apr 6, 2020

Sure, feel free, just be aware that it might be a while until any of us are able to review it. :)

@adiwajshing (Author)

No worries, it'll take a while to finish anyway!
