What is your idea?
Right now it's a bit of a hassle to ask for benchmark results as part of each pull request. Ideally, we'd have benchmarks run as part of the PR, or on demand whenever someone wants them before merging a change.
This would have made optimization PRs like #37 and #39 easier to merge, and it removes one more friction point for future pull requests.
Would you be willing to make the change?
Maybe
Additional Context
When we get this working, we should make sure benchmarks only run as part of PRs and never in regular builds, where something that takes 6–7 minutes is painfully slow for development cycles.
I can help with the GHA part of this (including a small tweak to cancel long-running jobs such as benchmarks when new commits are pushed to a PR). Since I'm not an expert in Java benchmarking and its tooling (e.g. which output format to use, creating +/- comparisons, etc.), it would be great if someone could point me in the right direction so I can create the PR.
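A minimal sketch of what that workflow could look like. The file name, job name, Java version, and the benchmark command are all assumptions (this project's actual benchmark task may differ); the `pull_request`/`workflow_dispatch` triggers and the `concurrency` block are the standard GitHub Actions mechanisms for PR-only runs and for canceling in-flight jobs when new commits land:

```yaml
# Hypothetical workflow: .github/workflows/benchmark.yml
name: Benchmarks

# Run only for pull requests, or on demand via manual dispatch —
# never as part of regular push builds
on:
  pull_request:
  workflow_dispatch:

# Cancel an in-flight benchmark run when new commits are pushed to the same PR
concurrency:
  group: benchmark-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Run benchmarks
        # Placeholder: replace with the project's actual benchmark task
        run: ./gradlew jmh
```

With `workflow_dispatch` in place, anyone with write access can also trigger a run manually from the Actions tab before merging.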
Hmm, I agree in principle, but I think people shouldn't send us PRs that carry big performance penalties, because that will never be acceptable for Ruler. So I'd like some mechanism that makes this clear to a contributor up front. Maybe it's enough to write it into our submission guidelines or template?
The default template does ask for benchmarks, which we can keep as is. The action here should help ensure that as folks push additional updates to their PR, the benchmark checks stay current.
As for steps to implement this, I suspect it'll look something like: