AB runner

An opinionated Apache Benchmark (ab) runner and result plotter.

[Image: comparison boxplot example]

Purpose

Running ab once can give unreliable results: maybe your server is doing other work, or the machine running the test is busy. To circumvent that issue, this script can take multiple measurements with a wait time in between.
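For illustration, here is roughly what the script automates, written as a plain shell loop. This is a hedged sketch, not how abrunner is implemented: -n, -c, and -g are standard ab flags, and the URL is a placeholder. The counts mirror the default settings described below.

# sketch: repeated ab runs with a pause in between
for i in $(seq 1 10); do
  ab -n 500 -c 10 -g iteration$i.dat https://localhost.test/
  sleep 300  # wait 5 minutes between measurements
done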

Contents

  • Installation
  • Quick start
  • Measure
  • Compare
  • Troubleshooting

Installation

Requirements:

  • Apache Benchmark (ab)
  • Gnuplot
  • Node.js and npm

To install:

  • Clone this repository.
  • Run npm install
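For example, assuming the standard GitHub clone URL for this repository:

git clone https://github.com/barryvanveen/ab-runner.git
cd ab-runner
npm install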

Quick start

./abrunner.js measure --help
./abrunner.js compare --help

Measure

A typical run can be started like this:

./abrunner.js measure -u https://localhost.test/ -o results/foo

This uses the default settings: 10 measurements of 500 requests each, with a concurrency of 10. Between measurements it waits for 5 minutes. The results are stored in ./results/foo.

For more advanced options, read the advanced measure docs.

Results

Running this command will create a bunch of outputs:

  • iteration*.dat files contain the raw ab measurements
  • iteration*.out files contain the ab output that is normally printed to the terminal
  • combined.dat contains all measurements combined
  • combined.stats contains some statistics collected from the combined measurements
  • measure.png contains a plot for visually inspecting the response times of the individual runs and of everything combined
  • measure.p is the Gnuplot script used to create the above plot
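The .dat files use ab's gnuplot output format: one tab-separated row per request, with a human-readable start time, the start time in Unix seconds, the connect time (ctime), processing time (dtime), total time (ttime), and wait time, the times in milliseconds. A hypothetical excerpt (the values are made up purely for illustration):

starttime                   seconds     ctime  dtime  ttime  wait
Sat 01 Jan 12:00:00 2022    1641038400  2      38     40     35
Sat 01 Jan 12:00:01 2022    1641038401  3      41     44     39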

Compare

Compare any number of measurements you took before. The result will be a combined boxplot.

For each measurement you want to incorporate into the comparison, provide the combined.dat data file (or another ab gnuplot output file) and an appropriate label.

./abrunner.js compare -i results/foo/combined.dat results/bar/combined.dat -l "Foo" "Bar" -o results/comparison

Results

Running this command will create a bunch of outputs:

  • run*.dat files are copies of the input files
  • run*.stats files contain some statistics collected from the corresponding input file
  • compare.png contains a plot comparing all input files
  • compare.p is the Gnuplot script used to create the above plot
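Because the Gnuplot script is saved next to the plot, you can tweak it (labels, ranges, styles) and re-render the plot yourself, assuming gnuplot is on your PATH:

gnuplot results/comparison/compare.p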

The comparison plot will look something like this:

[Image: example comparison plot]

Troubleshooting

When running a new measurement, the iteration*.out files capture the ab output. Sometimes they contain an error message that helps you pinpoint the problem.

If you run ab against a bare domain, make sure the URL ends with a /. If the URL already includes a path, this is not a problem.
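For example (assuming ab's usual behavior of rejecting a URL without a path):

ab -n 10 -c 1 https://localhost.test    # fails: ab treats this as an invalid URL
ab -n 10 -c 1 https://localhost.test/   # works: the trailing slash supplies a path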

The output.log file always contains a list of the command input and all commands that were run. If the input arguments are somehow not parsed correctly, you should be able to spot that here.