Second Alpha Release: Version 0.8.1
Less than one month after the first alpha release, we are happy to announce the second alpha version of our optimizationBenchmarking.org evaluator.
The new release has some exciting new functionality (this is not hipster-buzzword-speak, I am truly a bit excited that this works):
- You can now transform the input data for the supported diagrams with almost arbitrary functions. Let's say you want the x-axis of an ECDF diagram to not just represent your function evaluations (FEs) dimension, but the logarithm of 1+FEs? Then just specify <cfg:parameter name="xAxis" value="lg(1+FEs)" /> in your evaluation.xml! (Both example expressions are collected in the short sketch after this list.)
- You can use all numerical instance features and experiment parameters, as well as dimension limits, in these expressions too! Let's say you have measured data for experiments on Traveling Salesman Problems (TSPs) and you also want to scale the above FEs by dividing them by the square of the feature n, which denotes the number of cities in a benchmark instance. Then how about specifying <cfg:parameter name="xAxis" value="lg((1+FEs)/(n²))" />?
- Under the hood, the font support has been improved. When creating LaTeX output, we use the same fonts as LaTeX uses. However, these fonts may not have glyphs for some Unicode characters; for example, the cmr fonts may not support ². To deal with this, we use composite fonts, which then render ² with a glyph from a platform-default font that does have that glyph. Not beautiful, but for now it will do.
- We now actually print some form of descriptive text into our reports. There is still quite some way to go to get good text, but we are moving forward.
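For reference, the two example expressions from above are collected here, each as a single cfg:parameter line with a comment describing what it does. Only these parameter lines come from the examples above; the rest of the surrounding evaluation.xml (namespace declarations, enclosing elements) is omitted rather than guessed at.

<!-- plot over the logarithm of 1+FEs instead of the raw FEs dimension -->
<cfg:parameter name="xAxis" value="lg(1+FEs)" />

<!-- additionally divide by the squared instance feature n (the number of cities) -->
<cfg:parameter name="xAxis" value="lg((1+FEs)/(n²))" />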
Although we are still not near our vision of automated interpretation of experimental results in the domains of Machine Learning and optimization, release 0.8.1 marks a step forward. It allows you to explore your experiment data much more freely and hopefully to find interesting relationships between your algorithm's performance, its parameters, and the features of your benchmark instances.
The slides describing the evaluator together with some examples have been updated as well.
Downloads
- Stand-Alone Command-Line Tool [md5] [sha1] — This includes all necessary dependencies. You can evaluate your experiments from the command line using this tool.
- Command-Line Tool without Dependencies [md5] [sha1] — This is just the raw code of the core command-line tool. You will need the required third-party libraries on the classpath to run it.
- Sources (of Command-Line Tool) [md5] [sha1] — This jar includes all sources of the core command-line tool.
- JavaDoc (of Command-Line Tool) [md5] [sha1] — This jar includes the comprehensive JavaDoc documentation of the core command-line tool.
- Maven pom [md5] [sha1] — This file is used by Maven to automatically find all dependencies when you use our core code as part of your software.
Besides the downloads from our Maven repository, you can also download this version of the command-line tool from GitHub (where only the sources are provided).
Maven POM
The optimizationBenchmarking.org framework can also be used as a library inside your own software. If you build with Maven, then you can use version 0.8.1 of the optimizationBenchmarking.org framework as an external dependency by including the following information in your Maven POM.
<repositories>
  <repository>
    <id>optimizationBenchmarking</id>
    <url>http://optimizationbenchmarking.github.io/optimizationBenchmarking/repo/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>optimizationBenchmarking.org</groupId>
    <artifactId>optimizationBenchmarking</artifactId>
    <version>0.8.1</version>
  </dependency>
</dependencies>
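To show where these two snippets go, here is a minimal sketch of a complete pom.xml using them. The com.example coordinates of the enclosing project are placeholders for your own project; only the repository and dependency entries are taken from above.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- placeholder coordinates: replace with your own project's values -->
  <groupId>com.example</groupId>
  <artifactId>my-experiment-analysis</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- the repository hosting the optimizationBenchmarking artifacts (from above) -->
  <repositories>
    <repository>
      <id>optimizationBenchmarking</id>
      <url>http://optimizationbenchmarking.github.io/optimizationBenchmarking/repo/</url>
    </repository>
  </repositories>

  <!-- the framework as an external dependency (from above) -->
  <dependencies>
    <dependency>
      <groupId>optimizationBenchmarking.org</groupId>
      <artifactId>optimizationBenchmarking</artifactId>
      <version>0.8.1</version>
    </dependency>
  </dependencies>
</project>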