Releases: optimizationBenchmarking/optimizationBenchmarking

Alpha Version with GUI

13 Sep 21:55
Pre-release

This version brings some improvements to the evaluator core and more module options, but most importantly: a graphical user interface (GUI)! It is a co-release of the core project, i.e., the command line tool, and the GUI. Of course, you can also access it via our central repository.

The GUI

Let us talk about the GUI first: Since its first release, the optimizationBenchmarking evaluator has been available as a command line tool, i.e., you start it from the shell and provide all the information it needs in the form of files. To make using our tool easier, we now provide a graphical user interface (GUI) in the form of a locally running, stand-alone web application.

Our evaluation process needs meta-information about your experimental data, such as what your measurement dimensions are, what your benchmark instances are, and what the parameters of the algorithms you have experimented with are, as well as what kind of information you want the evaluator to produce for you. The GUI allows you to specify this information via convenient (HTML) forms, which are annotated with helpful hints. Furthermore, the GUI also allows you to run the evaluator itself and to download several example data sets into your workspace. You can also upload experimental results and download the results of your evaluation. It also comes with built-in help. All of this should make it much easier for you to use our system.

The GUI has been written as a stand-alone web application based on the embedded Jetty server. This has a striking advantage: computing high-level statistics, creating a report document, and then compiling it with LaTeX may take some time if you have a lot of experimental data, e.g., from several experiments on many benchmark instances. You can either run the system entirely on your local computer and patiently wait until it has finished, or you can start the GUI on a powerful server (or, basically, any other computer) in your lab. In both cases, you access the GUI via your web browser in exactly the same way, but in the latter case, no computational load is created on your computer (only the server will sweat).
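
As a rough illustration of this architecture, here is a minimal sketch of an embedded Jetty server that serves a single page; it is not the GUI's actual code, and the port and handler below are assumptions made only for the example:

```java
// A minimal sketch, assuming Jetty 9 and the javax.servlet API: an embedded
// server listening on port 8080 that serves one placeholder page. This only
// illustrates the embedded-Jetty idea, not the GUI's actual implementation.
import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedGuiSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080); // a browser can then open http://localhost:8080/
    server.setHandler(new AbstractHandler() {
      @Override
      public void handle(String target, Request baseRequest,
          HttpServletRequest request, HttpServletResponse response)
          throws IOException {
        response.setContentType("text/html;charset=utf-8");
        response.getWriter().println("<h1>Placeholder page</h1>");
        baseRequest.setHandled(true); // mark the request as processed
      }
    });
    server.start(); // the very same code runs on a laptop or on a lab server
    server.join();  // block until the server is stopped
  }
}
```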

If you work in a research group, the server-based approach has the additional advantage that you can have one centralized repository to store all of your experimental results. This makes sharing results throughout the group much easier. (In future versions of our system, we may even add support for this.) If you implement a suitable backup strategy for this repository, you also gain additional protection against the loss of precious experimental results.

Like the original command line application, the GUI comes as a stand-alone jar, i.e., you do not need to install anything else. Just put the jar on your computer, start it, and you are done. It will even automatically open a browser and navigate it to the application.
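
The browser auto-open can be pictured with a small sketch like the following; this is only an illustration, assuming the GUI listens on http://localhost:8080/ (the actual address and the mechanism used by the jar may differ):

```java
// A minimal sketch of opening the browser automatically, under the assumption
// that the GUI listens on http://localhost:8080/; not the GUI's start-up code.
import java.awt.Desktop;
import java.net.URI;

public class OpenBrowserSketch {
  public static void main(String[] args) throws Exception {
    URI gui = new URI("http://localhost:8080/"); // assumed address of the GUI
    if (Desktop.isDesktopSupported()
        && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
      Desktop.getDesktop().browse(gui); // point the default browser at the GUI
    } else {
      System.out.println("Please open " + gui + " manually.");
    }
  }
}
```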

Improvements to the Core

One visible improvement to the core is that diagrams which illustrate functions (such as the ECDF or the progress of a statistical parameter over time) can now be plotted either as-is or as "ranking plots". In a ranking plot, instead of illustrating the actual values of the functions, we plot their ranks. This makes it easy to distinguish visually similar values. A situation where this may come in handy is the following: Assume you want to compare optimization algorithms solving the TSP. They may start at really bad solutions, where the objective value may be 200 times as high as the optimum, i.e., f/f*=200. As time goes by, they may get very close to the optimum, 1≤f/f*≤1.001. If we have values as large as 200 and as small as 1.001 in one diagram, it will be virtually impossible to distinguish 1.001 from 1.002. In a ranking plot, that won't be a problem.
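
To make the ranking idea concrete, here is a minimal sketch with made-up numbers (not the evaluator's actual plotting code) that replaces the raw values at one point of the x-axis by their ranks:

```java
// A minimal sketch of the ranking idea; the evaluator's real ranking plots are
// computed internally and may handle ties differently.
import java.util.Arrays;

public class RankingSketch {

  // Replace each value by its rank (1 = smallest); ties are ignored for brevity.
  static double[] ranks(double[] values) {
    Integer[] order = new Integer[values.length];
    for (int i = 0; i < order.length; i++) {
      order[i] = i;
    }
    Arrays.sort(order, (a, b) -> Double.compare(values[a], values[b]));
    double[] rank = new double[values.length];
    for (int r = 0; r < order.length; r++) {
      rank[order[r]] = r + 1;
    }
    return rank;
  }

  public static void main(String[] args) {
    // f/f* values of three algorithms at one point on the x-axis:
    double[] fOverFStar = { 1.001, 1.002, 200.0 };
    // 1.001 and 1.002 are hard to tell apart on a linear axis, easy via ranks:
    System.out.println(Arrays.toString(ranks(fOverFStar))); // [1.0, 2.0, 3.0]
  }
}
```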

Besides these visual changes, there have been a few minor improvements and bug fixes. The primary download is optimizationBenchmarkingGui-0.8.4-full.jar.

BugFix Release

14 Jul 12:38
Pre-release

This is a minor new release, the fourth alpha version of our optimizationBenchmarking.org evaluator.

You can download its binaries from our Maven repository, which also includes a stand-alone binary.

This version solves a long-standing problem with the LaTeX output. As you may know, the software can generate algorithm performance reports in LaTeX format and automatically compile them to pdf.

Since our evaluator software is quite versatile and its modules can generate series of figures with charts, I developed the LaTeX package figureSeries, which provides floating figures that can break over multiple pages. This was needed because we can have figures with arbitrarily many sub-figures, and the software cannot determine how much space they will occupy in the final document, i.e., LaTeX must handle the layout. Unfortunately, my figureSeries had some bugs, e.g., it would sometimes cause "Float(s) lost" errors during LaTeX compilation. These bugs have been fixed in the new version 0.9.4 of that package. figureSeries now internally makes use of the great cuted package, which turned out to work better than my own code. Thus, LaTeX document generation and compilation are now more robust.

There are also some improvements in the LaTeX tool chain that we use for compiling the documents automatically. Our evaluator can automatically detect whether and where LaTeX is installed on the system. Depending on the type of document you generate and on what you have installed, it will try to configure a proper tool chain for compilation. The code for this has been cleaned up and consolidated, and we now also support LuaTeX (besides standard LaTeX, pdfTeX, and XeTeX).
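
The detection can be imagined roughly like the following sketch, which simply scans the PATH for a LaTeX executable; this is only an illustration under that assumption, not the evaluator's actual tool-chain detection code:

```java
// A minimal sketch: look for a LaTeX executable (e.g., "pdflatex", "lualatex",
// "xelatex") in the directories listed on the PATH environment variable.
import java.io.File;

public class LatexDetectionSketch {

  // Return the first matching executable found on the PATH, or null.
  static File findOnPath(String executable) {
    String path = System.getenv("PATH");
    if (path == null) {
      return null;
    }
    for (String dir : path.split(File.pathSeparator)) {
      for (String name : new String[] { executable, executable + ".exe" }) {
        File candidate = new File(dir, name);
        if (candidate.isFile() && candidate.canExecute()) {
          return candidate;
        }
      }
    }
    return null;
  }

  public static void main(String[] args) {
    File pdflatex = findOnPath("pdflatex");
    System.out.println(pdflatex == null ? "no pdflatex found" : "found: " + pdflatex);
  }
}
```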

All in all, not much has changed on the surface, but you should see fewer errors, and the software has become a bit more robust.

Third Alpha Release

05 Jul 14:03
Pre-release

This is a minor new release, the third alpha version of our optimizationBenchmarking.org evaluator.

You can download its binaries from our Maven repository, which includes a stand-alone binary.

The robustness of the software has been improved; in particular, it no longer strictly requires a Java JDK but can also run under a JRE. If the evaluator is executed under a JDK and loads experiment data, it will automatically generate and compile data container classes which are tailored to the kind of data being read. Under a plain JRE, we no longer throw an exception and terminate, but instead use fall-back classes. These classes are slower and require more memory than the tailored ones, but at least the system will work properly.
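
One common way to make this distinction at run time is to ask the platform for a system Java compiler; the following is a minimal sketch of that check, not necessarily how the evaluator implements it:

```java
// A minimal sketch of the JDK-versus-JRE check: under a plain JRE, no system
// Java compiler is available, so generated classes cannot be compiled.
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerCheck {
  public static void main(String[] args) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    if (compiler != null) {
      System.out.println("JDK detected: tailored data container classes can be generated and compiled.");
    } else {
      System.out.println("JRE only: fall back to generic (slower, more memory-hungry) container classes.");
    }
  }
}
```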

Second Alpha Release

28 Jun 11:26
Pre-release

This is the second alpha version of our project. It contains some major changes compared to the first alpha version:

  1. You can now transform the input data for the supported diagrams with almost arbitrary functions. Let's say you want the x-axis of an ECDF diagram to represent not just your function evaluations (FEs) dimension, but the logarithm of 1+FEs: then just specify <cfg:parameter name="xAxis" value="lg(1+FEs)" /> in your evaluation.xml!
  2. You can use all numerical instance features and experiment parameters, as well as dimension limits, in these expressions too! Let's say you have measured data from experiments with Traveling Salesman Problems (TSPs) and you also want to scale the above FEs by dividing them by the square of the feature n, which denotes the number of cities in a benchmark instance. Then how about specifying <cfg:parameter name="xAxis" value="lg((1+FEs)/(n²))" />?
  3. Under the hood, the font support has been improved. When creating LaTeX output, we use the same fonts as LaTeX does. However, these may not have glyphs for some Unicode characters -- maybe they (e.g., cmr) do not support ². In order to deal with this, we use composite fonts, which then render ² with a glyph from a platform-default font that has that glyph. Not beautiful, but for now it will do.
  4. We now actually print some form of descriptive text into our reports. There is still quite some way to go before the text is good, but we are moving forward.

These changes broke the compatibility of the settings for ECDF and Aggregate functions in the evaluation.xml files. I hope this will not happen often anymore, but we are still in an alpha phase of our system, so it could not be avoided. The new features are quite nice, so I think they are worth it.

The binaries of this version can be downloaded from our Maven Repository.

First Alpha Release

31 May 22:26
Pre-release

This is the very first alpha release of the optimizationBenchmarking.org Evaluator.

There are still likely several errors in the code, some essential features are missing, and the algorithm performance reports contain no text (just figures). All in all, this software version is not yet ready for productive use. But it sort of works, i.e., you can load performance data in CSV, BBOB, or TSP Suite format. You can generate reports in LaTeX for document classes such as IEEEtran, sig-alternate, or LLNCS, or in XHTML, or for export to other applications. So this version allows you to peek at what we are up to and what may eventually be possible.

We will use this version to publicize our activity and, hopefully, to get early feedback from colleagues and potential users.

The data formats defined in this release (in particular, the format for specifying evaluation processes) should not yet be considered stable. They may be adapted once we implement more functionality and get a clearer impression of what is good, design-wise.