Continuous benchmark #2876
Replies: 3 comments 2 replies
-
Related discussion #2767
-
Yeah, I'd wager the results vary a lot if run on GitHub. All the cloud services I've used have odd variability in runtimes if you measure them. Although maybe it's not a big deal for this kind of thing. Anyway, I'm more worried about dependency creep than anything else. So 🤷♂️ maybe.
-
Anyway, I think the best thing is still to have a separate program that is specifically for benchmarking stuff, like a benchmark program in the tools/ folder that outputs benchmark results when run. That would be fine and could be run wherever.
-
GitHub Actions supports continuous benchmarking via https://github.com/marketplace/actions/continuous-benchmark, and it works with Catch2.
I think this might be a cool thing to add. There have been a few PRs lately that improve performance. It would be nice if we could substantiate these claims with some graphs that show the evolution of benchmarks over commits and time.
This would potentially require rewriting some unit tests with Catch2, but I don't see that as a bad thing. It can be pulled in using cmake FetchContent, so it isn't a faffy dependency, and it would only be a dependency for the unit tests, not the main library. The reporting is a bit nicer than dlib's in my opinion anyway.
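A sketch of what pulling Catch2 in via FetchContent could look like (the version tag, file names, and target names are illustrative, not taken from dlib's actual build):

```cmake
include(FetchContent)

FetchContent_Declare(
  Catch2
  GIT_REPOSITORY https://github.com/catchorg/Catch2.git
  GIT_TAG        v3.5.2  # illustrative; pin whichever release is current
)
FetchContent_MakeAvailable(Catch2)

# Only the benchmark/test executable links against Catch2;
# the dlib library itself picks up no new dependency.
add_executable(dlib_benchmarks benchmarks.cpp)
target_link_libraries(dlib_benchmarks PRIVATE dlib::dlib Catch2::Catch2WithMain)
```

Since FetchContent downloads and builds Catch2 at configure time, nothing needs to be preinstalled on the runner.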
For example, I think tracking the performance of dlib::matrix, dlib::fft, and others is super important. Now, I don't know if the results vary depending on the runner. Like, do GitHub runners occasionally run on different servers with different specs? That could skew results. I don't know.
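For reference, a minimal workflow using that marketplace action might look roughly like this (a sketch under assumptions: the build commands, the benchmarks target, and the Catch2 flags are hypothetical, not dlib's actual setup):

```yaml
name: Benchmarks
on:
  push:
    branches: [master]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Hypothetical build; assumes a 'benchmarks' target exists.
      - name: Build and run benchmarks
        run: |
          cmake -B build -DCMAKE_BUILD_TYPE=Release
          cmake --build build --target benchmarks
          ./build/benchmarks | tee benchmark_result.txt

      # The action parses Catch2 output, stores results on gh-pages,
      # and can alert on regressions between commits.
      - name: Store and compare results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Catch2 Benchmark
          tool: 'catch2'
          output-file-path: benchmark_result.txt
          github-token: ${{ secrets.GITHub_TOKEN }}
          auto-push: true
```

(Note: `secrets.GITHUB_TOKEN` should be all-caps; the action publishes the per-commit graphs this thread is asking about to a gh-pages branch by default.)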