Optimization Benchmarking

Second Alpha Release: Version 0.8.1

28 Jun 2015

Less than one month after the first alpha release, we are happy to announce the second alpha version of our optimizationBenchmarking.org evaluator.

The new release has some exciting new functionality (this is not hipster buzzword-speak; I am truly a bit excited that this works):

  1. You can now transform input data for the supported diagrams with almost arbitrary functions. Let's say you want the x-axis of an ECDF diagram to represent not just your function evaluations (FEs) dimension, but the logarithm of 1+FEs. Then just specify <cfg:parameter name="xAxis" value="lg(1+FEs)" /> in your evaluation.xml!
  2. You can use all numerical instance features and experiment parameters, as well as dimension limits, in these expressions too! Let's say you have measured data for experiments on Traveling Salesman Problems (TSPs) and want to scale the above FEs by dividing them by the square of the feature n, which denotes the number of cities in a benchmark instance. Then how about specifying <cfg:parameter name="xAxis" value="lg((1+FEs)/(n²))" />? (A sketch of such an evaluation.xml fragment follows this list.)
  3. Under the hood, the font support has been improved. When creating LaTeX output, we use the same fonts as LaTeX does. However, these may lack glyphs for some Unicode characters; e.g., cmr does not support ². To deal with this, we use composite fonts, which then render ² with a glyph from a platform-default font that has that glyph. Not beautiful, but for now it will do. (A minimal Java sketch of this idea follows below.)
  4. We now actually print some form of descriptive text into our reports. There is still quite some way to go before the text is good, but we are moving forward.
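
As a concrete illustration of points 1 and 2, such a fragment of an evaluation.xml might look as follows. Only the <cfg:parameter> line is taken from the examples above; the enclosing <module> element and its name are assumptions made for the sake of the sketch:

<!-- hypothetical fragment: the enclosing <module> element is assumed -->
<module name="ecdf">
  <!-- log-scale the x-axis and normalize FEs by instance size n -->
  <cfg:parameter name="xAxis" value="lg((1+FEs)/(n²))" />
</module>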

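To illustrate the composite-font idea from point 3: the snippet below is not the evaluator's actual code, just a minimal sketch in plain java.awt showing how one can test whether a font provides a glyph and, if not, pick a platform font that does:

import java.awt.Font;
import java.awt.GraphicsEnvironment;

/** Minimal glyph-fallback sketch; not the evaluator's actual code. */
public class GlyphFallbackSketch {
  public static void main(final String[] args) {
    final char sup2 = '\u00B2'; // the '²' character
    // stand-in for a LaTeX text font such as cmr
    final Font mainFont = new Font("Serif", Font.PLAIN, 10);
    if (mainFont.canDisplay(sup2)) {
      System.out.println(mainFont.getFontName() + " can render ² itself.");
    } else {
      // fall back to the first installed font that has the glyph
      for (final Font candidate : GraphicsEnvironment
          .getLocalGraphicsEnvironment().getAllFonts()) {
        if (candidate.canDisplay(sup2)) {
          System.out.println("Falling back to "
              + candidate.getFontName() + " for ².");
          break;
        }
      }
    }
  }
}
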
Although we are still not near our vision of automated interpretation of experimental results in the domains of Machine Learning and optimization, release 0.8.1 marks a step forward. It allows you to explore your experiment data much more freely and hopefully to find interesting relationships between your algorithm's performance, its parameters, and the features of your benchmark instances.

The slides describing the evaluator together with some examples have been updated as well.

Downloads

Besides the downloads from our Maven repository, you can also get this version of the command-line tool from GitHub, although there only the sources are provided.

Maven POM

The optimizationBenchmarking.org framework can also be used as a library inside your own software. If you build with Maven, you can use version 0.8.1 of the optimizationBenchmarking.org framework as an external dependency by including the following information in your Maven POM.

<repositories>
  <repository>
    <id>optimizationBenchmarking</id>
    <url>http://optimizationbenchmarking.github.io/optimizationBenchmarking/repo/</url>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>optimizationBenchmarking.org</groupId>
    <artifactId>optimizationBenchmarking</artifactId>
    <version>0.8.1</version>
  </dependency>
</dependencies>
