
Use an easily reproducible server stack as basis for the benchmarks #41

Open
motin opened this issue Dec 11, 2015 · 2 comments

Comments


motin commented Dec 11, 2015

Currently it takes hours to reproduce the stack that is used to produce the benchmark results. This discourages others from reproducing the benchmarks on their own servers and workstations, biasing the results toward a particular benchmarking environment.

If the underlying software stack is shared, the benchmark results vary only with the host machine's hardware specs and the frameworks' differing implementations.

It also simplifies contributing to php-framework-benchmark, since it becomes easier to test PRs locally before submitting them.

kenjis (Owner) commented Dec 11, 2015

I recommend that all of you run the benchmarks on an environment equivalent to your production environment. Those are the only benchmarks that are true for you; other people's benchmarks are not.

Why does it take you so much time to install this repo in your environment?

motin (Author) commented Dec 11, 2015

> I recommend that all of you run the benchmarks on an environment equivalent to your production environment. Those are the only benchmarks that are true for you; other people's benchmarks are not.

My production environment is Docker on AWS instances, and with Docker stacks it is trivial to run the benchmarks as well. I'll do it and report the results.

However, in my opinion the point of these benchmarks should not be to compare different operating systems; it should be to focus on the frameworks' different implementations and see which ones are slower and why.

Easily reproducible environments make it possible to test the frameworks quickly across various stacks, which gives insight into which frameworks are best suited to your particular stack.

It is even possible to reproduce your own production environment as a Docker stack (with regard to software versions and configuration) and thus directly compare its software/config against the other supplied stacks.
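For illustration, a minimal sketch of what such a shared stack could look like as a docker-compose file. Everything here (image tags, ports, mount paths, the nginx config file) is an assumption for the sake of example, not the repo's actual setup:

```yaml
# Hypothetical docker-compose.yml for a reproducible PHP benchmark stack.
# Image tags, ports, and paths are illustrative assumptions only.
version: "2"
services:
  web:
    image: nginx:1.9          # pinned web server version
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html                           # the benchmark checkout
      - ./nginx.conf:/etc/nginx/conf.d/default.conf  # assumed nginx config
    depends_on:
      - php
  php:
    image: php:5.6-fpm        # pinned PHP version
    volumes:
      - ./:/var/www/html
```

With something like this in place, reproducing the stack would reduce to `docker-compose up -d` followed by the repo's benchmark script, rather than hours of manual server setup, and every contributor would be benchmarking against the same pinned software versions.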

> Why does it take you so much time to install this repo in your environment?

It takes a lot of time to set up the environment so that all frameworks and all shell scripts run properly, and the process needs to be redone for every environment.

Even if it takes only 30 minutes to set up an environment that runs all frameworks, that is much less attractive than an easily reproducible environment that takes mere minutes and no advanced server setup know-how. Take the perspective of someone looking into contributing to this repo: being able to run the actual benchmarks is of course vital.
