Improve continuous benchmarking with Bencher #769

epompeii opened this issue Apr 20, 2024 · 3 comments

Hey fastcrypto team!
I came across your white paper, and I think you all have built a pretty nice continuous benchmarking site.

I just wanted to reach out because I'm the maintainer of an open source continuous benchmarking tool called Bencher: https://github.com/bencherdev/bencher

It looks like you all currently only benchmark releases, though I may be missing something.
Bencher would allow you to track your benchmarks over time, compare the performance of pull requests, and catch performance regressions before they get merged.
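
For reference, hooking Bencher into CI is mostly a matter of wrapping your existing benchmark command with the `bencher` CLI. A rough sketch (the project slug and testbed name here are placeholders, and the exact flags are in the Bencher docs):

```sh
# Run the existing cargo benchmarks and send the results to Bencher.
# Project slug and testbed name are placeholders for illustration.
bencher run \
  --project fastcrypto \
  --token "$BENCHER_API_TOKEN" \
  --adapter rust_criterion \
  --testbed ci-runner \
  "cargo bench"
```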

I would be more than happy to answer any questions that you all may have!

jonas-lj (Contributor) commented Apr 29, 2024

Thanks for reaching out, @epompeii! Yeah, we don't run the benchmarks on all PRs because they take a long time to run. But better reporting and comparison over time sounds very interesting.

I'm curious to hear about your experience running benchmarks online as part of CI. For us, performance varies quite a lot, which makes it a bit difficult to detect small changes.

epompeii (Author) commented Apr 29, 2024

> Yeah, we don't run the benchmarks on all PRs because they take a long time to run.

Yeah, this can definitely be a blocker. I think the most common thing I've seen is only running a subset of benchmarks on PRs to at least cover the critical path.
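
One cheap way to do that subset is to lean on cargo's built-in filtering; the target and filter names below are hypothetical:

```sh
# Run only the bench target that covers the critical path (hypothetical name).
cargo bench --bench critical_path

# Or run every bench target, but filter to benchmarks matching a name.
cargo bench -- sign_verify
```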

> I'm curious to hear about your experience running benchmarks online as part of CI.

There are a few ways to handle this. In order of most to least effective:

  1. Use a bare metal runner ($100+/month)
  2. Use an instruction count based benchmarking harness, in addition to a wall clock based harness (see the sketch after this list)
  3. Use statistical continuous benchmarking on shared CI runners
  4. Use relative continuous benchmarking on shared CI runners
  5. Run a nightly benchmarking job that then does a git bisect to find performance regressions
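
To expand on option 2: instruction counts are far more stable than wall clock time on noisy shared runners, so they can catch small regressions that timing jitter would otherwise hide. A minimal sketch using the `iai` crate (the benchmarked function is just a stand-in for a real workload):

```rust
use iai::black_box;

// Stand-in for a real workload, e.g. a signature verification.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

// iai runs each function under Cachegrind and reports instruction counts,
// which are effectively deterministic for the same code and toolchain.
fn bench_fibonacci() -> u64 {
    fibonacci(black_box(20))
}

iai::main!(bench_fibonacci);
```

This would live in `benches/`, with `harness = false` set for that bench target in `Cargo.toml`.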

jonas-lj (Contributor) commented May 1, 2024

Thanks! That's great advice.
