
Releases: optimizationBenchmarking/documentation-intro-slides

Presentations in Kassel and Görlitz

15 Sep 23:08

These are the slides of the talk "optimizationBenchmarking.org: An Introduction" and the demo example that I presented at

  1. 2016-09-09, Hochschule Zittau/Görlitz, Enterprise Application Development Group, Görlitz, Sachsen, Germany
  2. 2016-09-08, University of Kassel, FB16 - Elektrotechnik/Informatik, Distributed Systems - Verteilte Systeme, Kassel, Hessen, Germany

1. Contained Files

  1. intro-slides.pdf is the set of slides that I presented in Kassel and Görlitz.
  2. data.zip is the data set that I used in the demo. This is basically the MAX-SAT example that you can automatically download with the evaluator GUI.
  3. evaluatorGui.jar is the version of our software that I used for the demo. It is an internal release candidate for version 0.8.8 of the evaluator GUI.
  4. example-report.pdf is the report pdf produced by the evaluator in our demo. It was generated with style IEEEtran, i.e., with a LaTeX setup suitable for IEEE Transactions.
  5. example-report-sources.zip contains all the sources of the above report, i.e., is the full output of the evaluator.

2. Abstract of "optimizationBenchmarking.org: An Introduction"

Optimization algorithms have become a standard tool in many application areas such as management, logistics, engineering, design, chemistry, and medicine. They provide close-to-optimal approximate solutions for computationally hard problems within feasible time. This field has grown and evolved for the past 50 years and has several top-level journals dedicated to it. Research in optimization is focused on reducing the algorithm runtime and increasing the result quality. For such research to succeed and publications to have true impact on the real world, we need to be able to

  • analyze the performance of an algorithm, to
  • analyze the influence of different features of an optimization problem on its hardness, and to
  • compare the performance of different algorithms in a fair and sound fashion.

Many optimization methods are anytime algorithms, meaning that they start with a (usually bad) guess about the solution and improve their approximation quality step by step. All evolutionary algorithms, all local search algorithms (such as Simulated Annealing and Tabu Search), all swarm intelligence methods for optimization (such as ant colony and particle swarm optimization), CMA-ES, and memetic algorithms belong to this class, as do several exact and deterministic methods such as branch and bound, just to name a few.
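To make the anytime property concrete, here is a minimal Java sketch of a 1-flip hill climber for MAX-SAT: it starts from a random assignment and keeps a best-so-far solution that never gets worse, so it can be stopped at any moment and still return an answer. This is a generic illustration only; it is not one of the heuristics from the demo, and the clause encoding (signed, 1-based literal indices) is an assumption made for this example.

```java
import java.util.Random;

/**
 * Minimal sketch of an anytime algorithm: a 1-flip hill climber for
 * MAX-SAT. It keeps a best-so-far assignment that only improves over
 * time, so it can be stopped at any point. The clause encoding is an
 * assumption made for this illustration, not the format from data.zip.
 */
public class AnytimeMaxSat {

  /** count the clauses satisfied by assignment x */
  static int satisfied(final int[][] clauses, final boolean[] x) {
    int count = 0;
    for (final int[] clause : clauses) {
      for (final int literal : clause) {
        // literal > 0: variable (literal-1) must be true;
        // literal < 0: variable (-literal-1) must be false
        if ((literal > 0) == x[Math.abs(literal) - 1]) {
          count++;
          break;
        }
      }
    }
    return count;
  }

  /** run the hill climber for a fixed budget of evaluations */
  static boolean[] solve(final int[][] clauses, final int n,
      final long budget, final Random random) {
    final boolean[] best = new boolean[n];
    for (int i = 0; i < n; i++) {
      best[i] = random.nextBoolean(); // (usually bad) initial guess
    }
    int bestValue = satisfied(clauses, best);

    for (long fe = 0; fe < budget; fe++) {
      final int flip = random.nextInt(n); // flip one random variable
      best[flip] ^= true;
      final int value = satisfied(clauses, best);
      if (value >= bestValue) {
        bestValue = value; // keep the improved (or equal) assignment
      } else {
        best[flip] ^= true; // undo the flip: never get worse
      }
    }
    return best; // best-so-far solution, usable at any time
  }

  public static void main(final String[] args) {
    // toy formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    final int[][] clauses = { { 1, -2 }, { 2, 3 }, { -1, -3 } };
    final boolean[] result = solve(clauses, 3, 1000L, new Random(42L));
    System.out.println(satisfied(clauses, result) + " of "
        + clauses.length + " clauses satisfied");
  }
}
```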

The comparison and evaluation of anytime algorithms must consider the whole runtime behavior of the algorithms in order to avoid misleading conclusions. Also, performance data has to be gathered from multiple independent runs on multiple different benchmark instances. It is easy to see that a thorough analysis and comparison of optimization algorithms is complicated and cumbersome. We present an open-source software that can do this for you: you gather the data from your experiments, and the software analyzes it. Our goal is to support researchers and practitioners as much as possible by automating the evaluation of experimental results. The software does not require any programming, just your benchmarking data. It imposes no limits on either the type of algorithms to be compared or the type of problems they are benchmarked on. Our software produces human-readable conclusions and reports in either XHTML or LaTeX format. You can freely select and configure different diagram types and group your data according to different aspects to get a better understanding of the behavior of your algorithm. Figures are styled for direct reuse in journals such as IEEE Transactions or conference proceedings such as LNCS. The software is dockerized, meaning that you can apply it directly with minimal installation effort.
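To make the data-gathering step concrete, the following Java sketch performs several independent runs and writes one log file per run, with a line (function evaluations, elapsed milliseconds, best objective value) for every improvement, i.e., it records the whole runtime behavior. The directory layout, file names, and column order are assumptions made for this illustration, not necessarily the exact input format the evaluator expects; the website linked below documents the actual formats.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Random;

/**
 * Sketch of the data-gathering side: each independent run of an
 * algorithm on a benchmark instance writes one log file with a row
 * (function evaluations, elapsed ms, best objective value) for every
 * improvement. Paths and column order are illustrative assumptions,
 * not necessarily the evaluator GUI's input format.
 */
public class RunLogger {

  public static void main(final String[] args) throws IOException {
    final Path dir = Paths.get("results", "myAlgorithm", "instance01");
    Files.createDirectories(dir);

    final int runs = 20; // multiple independent runs per instance
    for (int run = 1; run <= runs; run++) {
      final Random random = new Random(run); // one seed per run
      try (PrintWriter log = new PrintWriter(
          Files.newBufferedWriter(dir.resolve("run_" + run + ".txt")))) {

        final long start = System.currentTimeMillis();
        double best = Double.POSITIVE_INFINITY;

        for (long fe = 1; fe <= 100_000; fe++) {
          // placeholder for one real objective function evaluation
          final double objective = random.nextDouble();
          if (objective < best) {
            best = objective; // improvement: record the trajectory point
            log.println(fe + "\t" + (System.currentTimeMillis() - start)
                + "\t" + best);
          }
        }
      }
    }
  }
}
```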

We demonstrate the utility of this software by investigating six primitive heuristics for the Maximum Satisfiability Problem (MAX-SAT). Similar examples are provided for download for numerical optimization and the Traveling Salesman Problem.

More information can be found at http://optimizationbenchmarking.github.io/.

References

  • Thomas Weise, Raymond Chiong, Ke Tang, Jörg Lässig, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, and Xin Yao. Benchmarking Optimization Algorithms: An Open Source Framework for the Traveling Salesman Problem. IEEE Computational Intelligence Magazine (CIM), 9(3):40-52, August 2014. Featured article and selected paper at the website of the IEEE Computational Intelligence Society (http://cis.ieee.org/).
    doi:10.1109/MCI.2014.2326101
  • Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization, accepted 12 February 2016, published online first on 23 February 2016.
    doi:10.1007/s10898-016-0417-5

3. Short Bio of Presenter Dr. Thomas Weise

Dr. Thomas Weise received a Diplom-Informatiker degree from Chemnitz University of Technology, Germany, in 2005 and a Ph.D. in Computer Science from the University of Kassel, Germany, in 2009. In the same year, he joined the University of Science and Technology of China (USTC) as a PostDoc. He is now a member of the USTC-Birmingham Joint Research Institute in Intelligent Computation and Its Applications (UBRI). Since 2011, he has been an Associate Professor in the same group.

Dr. Weise has made contributions to the fields of optimization, logistics planning, and evolutionary computation. He has authored or co-authored more than 80 publications in international journals and conferences. He is the author of the book Global Optimization Algorithms – Theory and Application, which has been cited more than 630 times according to Google Scholar. He is also the developer of the open source software optimizationBenchmarking. This software supports researchers in the fields of optimization and machine learning by mining performance information from large data sets or log files gathered from their experiments. The predecessor of this system, the TSP Suite, was also developed by him.

More information about Dr. Weise can be found on his personal website http://www.it-weise.de.

Presentations on 2016-05-30/31 in Wuhan

30 May 21:49

These are the slides of the talk "optimizationBenchmarking.org: An Introduction" and the demo example that I presented on 2016-05-30 and 2016-05-31 at two institutions, both located in Wuhan.

1. Contained Files

This release consists of the following files:

  1. intro-slides.pdf is the set of slides that I presented in Wuhan.
  2. data.zip is the data set that I used in the demo. This is basically the MAX-SAT example that you can automatically download with the evaluator GUI.
  3. evaluatorGui.jar is the version of our software that I used for the demo. It is version 0.8.6 of the evaluator GUI.
  4. example-report.pdf is the report pdf produced by the evaluator in our demo. It was generated with style IEEEtran, i.e., with a LaTeX setup suitable for IEEE Transactions.
  5. example-report-sources.zip contains all the sources of the above report, i.e., is the full output of the evaluator.

2. Abstract of "optimizationBenchmarking.org: An Introduction"

Optimization algorithms have become a standard tool in many application areas such as management, logistics, engineering, design, chemistry, and medicine. They provide close-to-optimal approximate solutions for computationally hard problems within feasible time. This field has grown and evolved for the past 50 years and has several top-level journals dedicated to it. Research in optimization is focused on reducing the algorithm runtime and increasing the result quality. For such research to succeed and publications to have true impact on the real world, we need to be able to

  • analyze the performance of an algorithm, to
  • analyze the influence of different features of an optimization problem on its hardness, and to
  • compare the performance of different algorithms in a fair and sound fashion.

Many optimization methods are anytime algorithms, meaning that they start with a (usually bad) guess about the solution and improve their approximation quality step by step. All evolutionary algorithms, all local search algorithms (such as Simulated Annealing and Tabu Search), all swarm intelligence methods for optimization (such as ant colony and particle swarm optimization), CMA-ES, and memetic algorithms belong to this class, as do several exact and deterministic methods such as branch and bound, just to name a few.

The comparison and evaluation of anytime algorithms must consider the whole runtime behavior of the algorithms in order to avoid misleading conclusions. Also, performance data has to be gathered from multiple independent runs on multiple different benchmark instances. It is easy to see that a thorough analysis and comparison of optimization algorithms is complicated and cumbersome. We present an open-source software that can do this for you: you gather the data from your experiments, and the software analyzes it. Our goal is to support researchers and practitioners as much as possible by automating the evaluation of experimental results. The software does not require any programming, just your benchmarking data. It imposes no limits on either the type of algorithms to be compared or the type of problems they are benchmarked on. Our software produces human-readable conclusions and reports in either XHTML or LaTeX format. You can freely select and configure different diagram types and group your data according to different aspects to get a better understanding of the behavior of your algorithm. Figures are styled for direct reuse in journals such as IEEE Transactions or conference proceedings such as LNCS. The software is dockerized, meaning that you can apply it directly with minimal installation effort.

More information can be found at http://optimizationbenchmarking.github.io/.

References

  • Thomas Weise, Raymond Chiong, Ke Tang, Jörg Lässig, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, and Xin Yao. Benchmarking Optimization Algorithms: An Open Source Framework for the Traveling Salesman Problem. IEEE Computational Intelligence Magazine (CIM), 9(3):40-52, August 2014. Featured article and selected paper at the website of the IEEE Computational Intelligence Society (http://cis.ieee.org/).
    doi:10.1109/MCI.2014.2326101
  • Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization, accepted 12 February 2016, published online first on 23 February 2016.
    doi:10.1007/s10898-016-0417-5

3. Short Bio of Presenter Dr. Thomas Weise

Dr. Thomas Weise received a Diplom-Informatiker degree from Chemnitz University of Technology, Germany, in 2005 and a Ph.D. in Computer Science from the University of Kassel, Germany, in 2009. In the same year, he joined the University of Science and Technology of China (USTC) as a PostDoc. He is now a member of the USTC-Birmingham Joint Research Institute in Intelligent Computation and Its Applications (UBRI). Since 2011, he has been an Associate Professor in the same group.

Dr. Weise has made contributions to the fields of optimization, logistics planning, and evolutionary computation. He has authored or co-authored more than 80 publications in international journals and conferences. He is the author of the book Global Optimization Algorithms – Theory and Application, which has been cited more than 600 times according to Google Scholar. He is also the developer of the open source software optimizationBenchmarking. This software supports researchers in the fields of optimization and machine learning by mining performance information from large data sets or log files gathered from their experiments. The predecessor of this system, the TSP Suite, was also developed by him.

More information about Dr. Weise can be found on his personal website http://www.it-weise.de.