This is the main website of the optimizationBenchmarking.org framework, a (dockerized)
Java 1.7 software system designed to simplify the evaluation, benchmarking, and comparison of optimization or Machine Learning algorithms. The software is developed at the Institute of Applied Optimization (IAO) at Hefei University in Hefei, Anhui, China. It can load log files created by (experiments with) an optimization or Machine Learning algorithm implementation, evaluate how the implementation progresses over time, and compare its performance to that of other algorithms (or implementations) across several different benchmark cases. It can create reports in LaTeX (ready for publication) or XHTML format, or export its findings as text files that can later be loaded by other applications. It makes no requirements regarding the implementation of the algorithms under investigation (nor regarding the programming language used) and requires no programming on your part! It has a convenient GUI. A short set of introduction slides about this project can be found here.
We jointly organize the International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), held March 29-31, 2018 in Xiamen, China. The paper submission deadline is November 15, 2017. Here is the CfP.
If you want to directly run our software and see the examples, you can use its dockerized version. Simply perform the following steps:
- Install Docker following the instructions for Linux, Windows, or MacOS.
- Open a normal terminal (Linux), the Docker Quickstart Terminal (Mac OS), or the Docker Toolbox Terminal (Windows).
- Type in
docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui
and hit return. The first time you do this, our software is downloaded. This may take some time, as the package is about 600 MB. After the download, the software will start.
- Browse to
http://<dockerIP>:9999
under Windows and Mac OS, where dockerIP is the IP address of your Docker container. This address is displayed when you run the container. You can also obtain it with the command
docker-machine ip default.
- Enjoy the web-based GUI of our software, which looks quite similar to this web site.
- 08 Jan 2021 » Analysing Algorithmic Behaviour of Optimisation Heuristics Workshop (AABOH'21)
- 07 Jan 2021 » Good Benchmarking Practices for Evolutionary Computation (BENCHMARKING'21)
- 07 Dec 2020 » Special Session on Benchmarking of Computational Intelligence Algorithms (BOCIA'21)
- 04 Apr 2020 » Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop
- 14 Jan 2020 » Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop
- 10 Sep 2019 » An Implementation in R is available
- 12 Jan 2019 » Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop
- 05 May 2018 » Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop
- 01 Jan 2018 » Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop
- 01 Oct 2017 » International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)
- 27 Jul 2017 » Research Talk
- 17 Feb 2017 » Release 0.8.9: Beta
- 09 Sep 2016 » Talks in Germany
The optimizationBenchmarking.org framework prescribes the following workflow, which is discussed in more detail in this set of slides:
- Algorithm Implementation: You implement your algorithm. Do it in a way that lets you generate log files containing rows such as (consumed runtime, best solution quality so far) for each run (execution) of your algorithm. You are free to use any programming language and to run it in any environment you want. We don't care about that; we just want the text files you have generated.
- Choose Benchmark Instances: Choose a set of (well-known) problem instances to apply your algorithm to.
- Experiments: Run your algorithm, i.e., apply it a few times to each benchmark instance, and you get the log files. You may want to do this several times with different parameter settings of your algorithm, or with different algorithms, so that you have comparison data.
- Use Evaluator: Now you can use our evaluator component to find out how well your method works! For this, you can define the dimensions you have measured (such as runtime and solution quality), the features of your benchmark instances (such as the number of cities in a Traveling Salesman Problem or the scale and symmetry of a numerical problem), the parameter settings of your algorithm (such as the population size of an EA), the information you want to get (ECDF? performance over time?), and how you want to get it (LaTeX, optimized for IEEE Transactions, ACM, or Springer LNCS? or maybe XHTML for the web?). Our evaluator will create the report with the desired information in the desired format.
- By interpreting the report and advanced statistics presented to you, you can get a deeper insight into your algorithm’s performance as well as into the features and hardness of the benchmark instances you used. You can also directly use building blocks from the generated reports in your publications.
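The log files from the first step can be as simple as plain-text files with one measurement row per improvement. As a minimal sketch only (the column layout, tab separator, and the toy random-search objective below are assumptions for illustration, not the framework's prescribed format), a run might append a (function evaluations consumed, best quality so far) row whenever it finds a better solution:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Random;

/** Illustrative only: one run of a toy minimizer that logs its progress. */
public class LogRun {

  /**
   * Perform one run and return its log as text: one tab-separated row
   * "(consumed FEs, best quality so far)" per improvement.
   */
  static String runAndLog(long seed, int maxFEs) {
    StringWriter buffer = new StringWriter();
    PrintWriter log = new PrintWriter(buffer);
    Random rnd = new Random(seed);
    double best = Double.POSITIVE_INFINITY;
    for (int fe = 1; fe <= maxFEs; fe++) {
      // stand-in for evaluating one candidate solution of a real problem
      double quality = rnd.nextDouble();
      if (quality < best) { // improvement: record it
        best = quality;
        log.printf("%d\t%.6f%n", fe, best);
      }
    }
    log.flush();
    return buffer.toString();
  }

  public static void main(String[] args) {
    // in practice you would write this to one file per run, e.g. run_01.txt
    System.out.print(runAndLog(42L, 1000));
  }
}
```

Writing one such file per run, grouped into one directory per benchmark instance and parameter setting, yields exactly the kind of text data the evaluator step below consumes.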