
Benchmarks in CI [Feature] #106

Open
sfmig opened this issue Jul 20, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@sfmig
Contributor

sfmig commented Jul 20, 2023

The basic benchmarks we have implemented are not part of CI yet. I need to do a bit of research on how to set that up, and on what we should compare benchmark results against.

Can we use GitHub runners or do we need a dedicated SWC machine, even for the simpler benchmarks?

  • We may have issues with benchmarks not being comparable if the GitHub runners we get have different specs

    • this blog post discusses a way around this via 'relative benchmarking', and explains it very well
    • the idea is to use the command asv continuous to run a side-by-side comparison of two commits on the same runner (continuous as in continuous integration); a workflow sketch is included after this list
    • napari and scikit-image follow this approach
  • In astropy they opt for dedicated machines:

    The benchmarks are run using airspeed velocity on dedicated physical machines belonging to members of the Astropy developer community.

  • The asv docs on machine tuning also seem relevant for this question

  • A third alternative could be to boot up a dedicated machine in the cloud every time a benchmark needs to be run, as described in this post
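
For illustration, here is a minimal sketch of what the relative-benchmarking job could look like as a GitHub Actions workflow. This is an assumption-laden sketch, not a proposal of the final setup: it assumes asv is already configured (an asv.conf.json at the repo root using the virtualenv environment type), that the workflow should trigger on pull requests, and that Python 3.11 is acceptable; the workflow and job names are placeholders.

```yaml
# Hypothetical sketch only: relative benchmarking on a GitHub-hosted runner.
# Assumes asv is already configured (asv.conf.json at the repo root) with the
# virtualenv environment type; names are placeholders.
name: benchmarks

on:
  pull_request:

jobs:
  asv-relative:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # asv compares commits, so it needs the full history

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install asv
        run: pip install asv virtualenv

      - name: Record machine info non-interactively
        run: asv machine --yes

      - name: Compare the PR head against its base branch on the same runner
        # Both commits are benchmarked within the same job, so only the
        # relative change between them is reported and differences in runner
        # specs across workflow runs shouldn't matter.
        run: asv continuous --factor 1.1 --show-stderr origin/${{ github.base_ref }} HEAD
```

Because both commits run on the same runner, this sidesteps the "different specs" problem above at the cost of roughly doubling the benchmark runtime per PR.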

@sfmig sfmig added the enhancement New feature or request label Jul 20, 2023
@adamltyson
Member

adamltyson commented Jul 20, 2023

Can we use GitHub runners or do we need a dedicated SWC machine, even for the simpler benchmarks?

I think we should use GH runners as much as we can, but we have local runners for any long-running tests.

A third alternative could be to boot up a dedicated machine in the cloud every time a benchmark needs to be run, as described in this post

Local runners work well, and we can get more machines if needed (or spawn jobs on our internal cluster from the runner), so we should be able to have both:
a) Sufficient compute
b) Consistent benchmarks using the same physical machines
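
As a sketch of what (b) could look like, the same comparison job could be pinned to a self-hosted runner so every run uses the same physical machine. This assumes a runner registered with an illustrative `benchmarks` label, with Python, pip and git available on that machine:

```yaml
# Hypothetical sketch only: the same asv comparison pinned to a self-hosted
# runner, so every run uses the same physical machine. The `benchmarks` label
# is illustrative and must match a label configured on the SWC runner.
name: benchmarks-dedicated

on:
  pull_request:

jobs:
  asv-dedicated:
    runs-on: [self-hosted, benchmarks]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install asv and record machine info
        run: |
          pip install asv virtualenv
          asv machine --yes

      - name: Compare the PR head against its base branch on fixed hardware
        run: asv continuous --factor 1.1 --show-stderr origin/${{ github.base_ref }} HEAD
```

The quicker benchmarks could stay on GitHub-hosted runners, with only the long-running ones routed to the self-hosted job.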

@willGraham01 willGraham01 transferred this issue from brainglobe/cellfinder-core Jan 3, 2024
@alessandrofelder alessandrofelder transferred this issue from brainglobe/cellfinder May 1, 2024