
Adds automatic display of benchmark results #90

Open
wants to merge 4 commits into master

Conversation

aa25desh
Contributor

Look at issue #47

With this commit I am getting the required results locally.

Since the relative precision is stored in the RELPREC dictionary, which is not generated by PkgBenchmark, we would not get it automatically.
@mforets Is there a way to get the relative precision?
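
For context, PkgBenchmark only picks up what `benchmark/benchmarks.jl` defines in a `BenchmarkGroup` called `SUITE`, and it only collects the BenchmarkTools statistics for each entry. A minimal sketch of that layout (group and benchmark names are illustrative, not the actual TaylorModels suite):

```julia
# benchmark/benchmarks.jl -- minimal sketch of the layout PkgBenchmark expects.
# Group and benchmark names are illustrative, not the actual TaylorModels suite.
using BenchmarkTools, TaylorModels

const SUITE = BenchmarkGroup()

SUITE["taylor1"] = BenchmarkGroup()
SUITE["taylor1"]["sin"] = @benchmarkable sin(Taylor1(20))

# A custom metric such as the relative precision has to be stored separately,
# e.g. in a plain dictionary like RELPREC; PkgBenchmark only collects the
# BenchmarkTools statistics (time, GC time, memory, allocations) from SUITE.
const RELPREC = Dict{String, Any}()
```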

There might still be a few things to be done by the repository owner:

  • Make sure to fix the version number in your CI setup.
  • Make sure the repository has permission to use GitHub Actions.

@lbenet
Member

lbenet commented Sep 15, 2020

Thanks for the contribution @aa25desh

What do you mean by "relative precision"?

@aa25desh
Contributor Author

This function measures the relative precision of the result in a more informative way than
taking the scalar overestimation, because it evaluates the precision of the lower and the
upper range bounds separately.
Look here.
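
For illustration, a sketch of what such a relative-precision measure could look like (this only spells out the idea from the docstring above, not the exact function used in the benchmarks):

```julia
# Sketch of a relative-precision measure in the sense quoted above: the lower
# and upper bounds of the computed enclosure `x` are assessed separately against
# a reference enclosure `x_ref`, normalised by the width of the reference.
# Illustration only; the actual function lives in the benchmark code.
using IntervalArithmetic

function relative_precision(x::Interval, x_ref::Interval)
    width_ref = sup(x_ref) - inf(x_ref)
    rel_lo = (inf(x_ref) - inf(x)) / width_ref  # overestimation of the lower bound
    rel_hi = (sup(x) - sup(x_ref)) / width_ref  # overestimation of the upper bound
    return (100 * rel_lo, 100 * rel_hi)         # as percentages
end
```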

@mforets
Contributor

mforets commented Sep 15, 2020

@mforets Is there a way to get the relative precision?

I see, you'd like to store additional columns in the results table. I had a quick look and I'm not sure that feature is available in the user interface of PkgBenchmark.jl.
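
For reference, something along these lines (a sketch using PkgBenchmark's documented entry points) produces the standard results table, which only has the BenchmarkTools columns and no obvious hook for extra user-defined ones:

```julia
# Sketch: what PkgBenchmark reports out of the box. The markdown table only
# contains the BenchmarkTools statistics (time, GC time, memory, allocations);
# there is no obvious hook for extra user-defined columns such as RELPREC.
using PkgBenchmark

results = benchmarkpkg("TaylorModels")            # runs benchmark/benchmarks.jl
export_markdown("benchmark_results.md", results)  # standard results table
```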

@aa25desh
Contributor Author

@lbenet should we find some workaround, or is it OK for now?

Member

@lbenet lbenet left a comment

I have added a few general remarks. I guess the most important one is how this works; can you show it? Perhaps @dpsanders has other comments on this PR.

Comment on lines +7 to +11
# Only trigger the benchmark job when you add `run benchmark` label to the PR
jobs:
Benchmark:
runs-on: ubuntu-latest
if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
Member

Can you show/describe how this works? It seems to me from these lines that you need to add `run benchmark` somewhere to see this working.

Contributor Author

Can you show/describe how this works?

BenchmarkCI creates a benchmark-results branch and saves the results there.
For example, look at this PR: the github-actions bot comments the results there and saves them in the benchmark-results branch here (in that repository's case there is no need to add `run benchmark` explicitly).

There are many ways one can trigger the benchmarking. In our case it runs when someone adds the `run benchmark` label to the PR. We can also have a configuration that triggers benchmarking for every PR without mentioning it explicitly.
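
For the record, the workflow step essentially runs BenchmarkCI from Julia; a sketch of the usual invocation (the exact script is in the workflow file added by this PR):

```julia
# Sketch of the Julia snippet a BenchmarkCI workflow step typically runs
# (the exact invocation used here is in the workflow file of this PR).
using BenchmarkCI

BenchmarkCI.judge()             # benchmark the PR head against the base branch
BenchmarkCI.displayjudgement()  # print the comparison in the CI log
BenchmarkCI.postjudge()         # post the comparison table as a PR comment (needs GITHUB_TOKEN)
# BenchmarkCI.pushresult() is the call that pushes the raw data to the benchmark-results branch.
```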

Comment on lines +7 to +8
/.benchmarkci
/benchmark/*.json
Member

Are these files appearing? How are they generated?

Contributor Author

Locally these files appear for me.
They are generated when the benchmark results are saved to file; look here.
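
Concretely, the JSON files are the serialized benchmark results; a sketch of how they are produced and read back with PkgBenchmark (the paths are illustrative):

```julia
# Sketch: how the *.json result files come about (paths are illustrative).
# BenchmarkCI keeps its own working copies under .benchmarkci, which is why
# both paths are ignored in .gitignore.
using PkgBenchmark

results = benchmarkpkg("TaylorModels")               # run the suite
writeresults("benchmark/result.json", results)       # serialize to JSON
results_back = readresults("benchmark/result.json")  # read back as BenchmarkResults
```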

@@ -0,0 +1,488 @@
# This file is machine-generated - editing it directly is not advised
Member

This file should be excluded from the PR; it is not needed.

Contributor Author

I have been following BenchmarkCIExample.jl; there they have kept a Manifest.toml in the benchmark folder.
But now I see BenchmarkCI.jl working without the Manifest.toml, so it looks like we don't need it.

@lbenet lbenet changed the title Adds - Automatic display of benchmark results Ads automatic display of benchmark results Sep 17, 2020
@lbenet lbenet changed the title Ads automatic display of benchmark results Adds automatic display of benchmark results Sep 17, 2020
@lbenet
Member

lbenet commented Oct 15, 2020

I think this PR is quite interesting; yet, I am not able to see any results. While I see that something is being triggered by GitHub Actions, there are no details, since it is simply skipped.

@dpsanders @mforets What do you think about this? Shall we give it a try?

@mforets
Contributor

mforets commented Oct 16, 2020

From https://github.com/JuliaIntervals/TaylorModels.jl/pull/90/files#r490853817: shall we try `run benchmark`? As a comment? I don't have previous experience with BenchmarkCI.

@aa25desh
Contributor Author

Shall we try `run benchmark`? As a comment?

Yes, as a comment in the PR.


3 participants