Comparison chart with bars #22

Open
orangy opened this issue Jan 22, 2019 · 7 comments

@orangy

orangy commented Jan 22, 2019

Thanks for such a nice project! I've been using JMH for quite a while and only just now found it :)

I'm trying to use the visualizer to compare values for different flavors of the same code, not several runs in the optimization process. For example, an http server (ktor.io) using different engines such as Netty, Jetty or Coroutines. Another example is multiplatform benchmarks for Kotlin JS, Native & JVM.

It would be nice to have a different comparison rendering that would show differently colored bars with a legend for the same test (with a vague, maybe configurable, definition of "same"). The bar graph as it is now makes no sense for such comparisons.

@jzillmann
Owner

Hey @orangy, do you have any concrete JMH JSON output you could share?

@orangy
Author

orangy commented Jul 3, 2019

We are trying to use it with multiplatform Kotlin Benchmarks: https://github.com/kotlin/kotlinx-benchmark
There are sample projects and they generate JMH-compatible JSON. You can easily generate your own.

@jzillmann
Owner

So I guess you would have different JSON files (e.g. one each for Kotlin JS, Native & JVM, etc.) and then you want to compare the same benchmarks across the files!?
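
Just to make sure we mean the same thing, the matching could key on the fully qualified benchmark name across files, roughly like this (a sketch only; the field names mirror the usual JMH JSON output, and the types are illustrative, not the visualizer's actual model):

```typescript
// Rough sketch: the fields below mirror the usual JMH JSON output
// (kotlinx-benchmark's JMH-compatible files should look similar), and the
// types are illustrative, not the project's actual data model.

interface BenchmarkResult {
  benchmark: string;            // fully qualified name, e.g. "org.example.HttpBench.plaintext"
  mode: string;                 // e.g. "thrpt"
  primaryMetric: {
    score: number;
    scoreError: number;
    scoreUnit: string;
  };
}

// One parsed result file per flavor (Netty, Jetty, Kotlin JS, Native, JVM, ...).
interface RunFile {
  label: string;                // legend entry, e.g. "Kotlin/JS"
  results: BenchmarkResult[];
}

// Group results by benchmark name so each group can be rendered as one
// cluster of differently colored bars, one bar per file.
function groupAcrossFiles(files: RunFile[]): Map<string, Map<string, BenchmarkResult>> {
  const groups = new Map<string, Map<string, BenchmarkResult>>();
  for (const file of files) {
    for (const result of file.results) {
      const group = groups.get(result.benchmark) ?? new Map<string, BenchmarkResult>();
      group.set(file.label, result);
      groups.set(result.benchmark, group);
    }
  }
  return groups;
}
```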

@DavidGregory084

DavidGregory084 commented May 23, 2022

@jzillmann I'd also like to thank you for this project, it's great!

I have a similar use case: in this PR I'm developing a hash map implementation that uses a different technique for hashing keys than the Scala collections implementation, so my benchmarks are basically comparing the corresponding methods of the two implementations to see how competitive my implementation is vs the Scala collections one.

I have benchmark methods like HashMapBench#scalaMapConcat and HashMapBench#hashMapConcat which benchmark the concat method from both implementations and I want to compare the results.

You can find an example of the kind of JMH results I want to compare here (albeit these ones are CSV not JSON).

I'm currently at @opencastsoftware working on open source, and although I'm not a great JS developer myself, I think there are people on my team who would love to have a go at implementing this.

@DavidGregory084

BTW, you can see the approach I took to generate the benchmark charts for that PR in this gist: basically I used a regex to extract the benchmarks containing scalaMap and hashMap in their names and matched up benchmarks by their name suffix.
Perhaps a user-friendly way to do that would be to ask users to provide multiple "benchmark series names" that are part of the benchmark name?
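
To make that concrete, here is a rough sketch of the series-name idea (all names and shapes are made up for illustration, not an existing API):

```typescript
// Illustrative sketch of the "benchmark series names" idea; none of these
// names exist in the project, and the parsing assumes method names like
// "HashMapBench.scalaMapConcat" / "HashMapBench.hashMapConcat".

interface SeriesPoint {
  series: string;      // e.g. "scalaMap" or "hashMap"
  benchmark: string;   // full benchmark name
  score: number;
}

// Pair benchmarks from different series by the shared suffix of their
// method name ("Concat", "Contains", ...).
function matchBySuffix(
  seriesNames: string[],
  results: { benchmark: string; score: number }[],
): Map<string, SeriesPoint[]> {
  const bySuffix = new Map<string, SeriesPoint[]>();
  for (const result of results) {
    // The method name is the last segment of the fully qualified benchmark name.
    const method = result.benchmark.split(".").pop() ?? result.benchmark;
    for (const series of seriesNames) {
      if (method.startsWith(series)) {
        const suffix = method.slice(series.length);
        const points = bySuffix.get(suffix) ?? [];
        points.push({ series, benchmark: result.benchmark, score: result.score });
        bySuffix.set(suffix, points);
      }
    }
  }
  return bySuffix;
}

// matchBySuffix(["scalaMap", "hashMap"], results) would then put
// HashMapBench.scalaMapConcat and HashMapBench.hashMapConcat into the
// same "Concat" group, ready to be drawn as two bars next to each other.
```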

@jzillmann
Owner

Hey @DavidGregory084, I think your request is different from the original purpose of this ticket (which I think is more about aggregating multiple result files into a single run instead of multiple runs).
If I understand you correctly, you have a single result file but want to bundle different benchmarks together (like all contains, all concats, etc.).
So in theory you could achieve that by giving each method you test its own benchmark class!?

As for code modifications, I'm not using the project right now, so I won't invest much time into it.
If you have people who are interested, wonderful. Just know that the code quality isn't the greatest and there are sadly no tests, so we would have to validate manually whether their changes break things.

At one point I also considered having a kind of configuration file (where you associate, include, and exclude benchmarks). So if the project were adapted and released as an npm module, people could have a configuration file in their project and generate the graphs more specifically...
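
Purely as a sketch of that idea, such a configuration file could look something like this (none of these fields or paths exist today; they are only meant to make the idea concrete):

```typescript
// Hypothetical shape for such a configuration file; nothing like this
// exists in the project today.

interface VisualizerConfig {
  // Result files to load, each with a label used in the legend.
  runs: { label: string; file: string }[];
  // Only benchmarks matching one of these patterns are shown.
  include?: string[];
  // Benchmarks matching one of these patterns are dropped.
  exclude?: string[];
  // Explicit groups of benchmarks rendered side by side as colored bars.
  associate?: { title: string; benchmarks: string[] }[];
}

const exampleConfig: VisualizerConfig = {
  runs: [
    { label: "JVM", file: "build/reports/benchmarks/jvm.json" },
    { label: "JS", file: "build/reports/benchmarks/js.json" },
  ],
  include: ["HashMapBench.*"],
  exclude: [".*Baseline.*"],
  associate: [
    {
      title: "concat",
      benchmarks: ["HashMapBench.scalaMapConcat", "HashMapBench.hashMapConcat"],
    },
  ],
};
```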

@mbosecke

mbosecke commented Mar 4, 2024

I'm not sure if this falls within the same request, but my case would be satisfied just by having a consistent x-axis whenever I'm uploading a run. Right now I'm changing my code and re-running the same benchmark; however, the bar charts can't be visually compared between runs because they all use a different, dynamically generated x-axis. If they all had the same x-axis, I could display all runs and see that one run was an improvement over a previous run because its bars would be visually longer.

A user-configurable option to specify the x-axis maximum would work well.
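
For example, something along these lines (a rough sketch, not the project's actual code):

```typescript
// Illustrative only: the axis maximum either comes from a user-provided
// value or is derived once from every loaded run, so bars from different
// runs stay visually comparable.

interface Run {
  scores: number[];   // primaryMetric.score of every benchmark in the run
}

function sharedAxisMax(runs: Run[], userMax?: number): number {
  if (userMax !== undefined) {
    return userMax;
  }
  const allScores = runs.flatMap(run => run.scores);
  const max = allScores.length > 0 ? Math.max(...allScores) : 0;
  // Add a little headroom so the longest bar doesn't touch the chart edge.
  return max * 1.05;
}
```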
