Add automatic benchmarking #54

Open
nickrobinson251 opened this issue Nov 14, 2021 · 0 comments
Labels
idea needs some investigation before we decide

Comments


nickrobinson251 commented Nov 14, 2021

Related to #52 and #53

Benchmarks for Julia packages are kind of annoying, so no one seems to do them... but we could figure it out

These should compare the performance of newer (or proposed) versions of this package to earlier versions. One possibility would be to compare against a fixed baseline so we can see the trend over time; another would be to always compare against master / the previous version so we can make a clearer pairwise comparison.

In either case we'd probably want to run both the "old" (baseline) and "new" versions in the same job, to help reduce the noise from hardware/OS differences. That is, always re-run "old" so it's on the same set-up as "new", rather than only running "new" and comparing against the stored results of a previous "old" run.
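The run-both-versions-pairwise approach above could be sketched with PkgBenchmark.jl, which checks out and benchmarks two revisions on the same machine. This is just a sketch, not something set up in this repo yet: it assumes a `benchmark/benchmarks.jl` suite exists, and the branch names are illustrative.

```julia
# Sketch, assuming PkgBenchmark.jl and a benchmark/benchmarks.jl suite.
using PkgBenchmark

# Benchmark the candidate revision and the baseline on the same machine,
# then compare the two results pairwise. Branch names here are examples.
results = judge(pwd(), "my-feature-branch", "master")

# Write a human-readable summary of the regressions/improvements.
export_markdown("judgement.md", results)
```

Because `judge` runs both revisions back-to-back on one machine, the comparison is less sensitive to hardware/OS differences than comparing against a stored historical result.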

And if using GitHub Actions to run benchmarks, relative comparisons are probably the way to go.

See https://labs.quansight.org/blog/2021/08/github-actions-benchmarks
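A minimal GitHub Actions workflow along these lines might look like the following. This is only a sketch of the relative-comparison idea: the job name, script, and `origin/master` baseline ref are assumptions, not anything configured in this repo.

```yaml
# Sketch: re-run both baseline and target on the same runner, so the
# comparison is relative rather than against stored historical numbers.
name: Benchmarks
on: [pull_request]
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0  # full history, so the baseline ref is available
      - uses: julia-actions/setup-julia@v1
      - name: Run benchmarks (target vs. baseline)
        run: |
          julia --project=benchmark -e '
            using Pkg; Pkg.instantiate()
            using PkgBenchmark
            j = judge(pwd(), "HEAD", "origin/master")
            export_markdown("judgement.md", j)'
```

The resulting `judgement.md` could then be posted as a PR comment, as the linked Quansight post does.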

(As with comparing to other libraries #53, we may want to test performance on the files available in https://github.com/NREL-SIIP/PowerSystemsTestData/)
