
Conference Proceeding

Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning

Citation
Christie LA, Brownlee A & Woodward JR (2018) Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. Genetic and Evolutionary Computation Conference 2018, 15.07.2018-19.07.2018. New York: ACM, pp. 209-210. https://doi.org/10.1145/3205651.3205747

Abstract
Benchmarks are important for comparing the performance of optimisation algorithms, but we can select instances that present our algorithm favourably and dismiss those on which it under-performs. Also related is the automated design of algorithms, which uses problem instances (benchmarks) to train an algorithm: careful choice of instances is needed for the algorithm to generalise. We sweep parameter settings of differential evolution applied to the BBOB benchmarks. Several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general, when in fact it performs poorly on a few key instances. These correlations vary with the number of evaluations.

Keywords
benchmarks; BBOB; ranking; differential evolution; continuous optimisation; parameter tuning; automated design of algorithms

Status: Published
Author(s): Christie, Lee A; Brownlee, Alexander; Woodward, John R
Funders: Engineering and Physical Sciences Research Council
Publication date: 31/12/2018
Publication date online: 31/07/2018
URL: http://hdl.handle.net/1893/27083
Related URLs: http://hdl.handle.net/11667/109
Publisher: ACM
Place of publication: New York
ISBN: 978-1-4503-5764-7
Conference: Genetic and Evolutionary Computation Conference 2018
Dates: 15.07.2018-19.07.2018