Abstract

As the number of practical applications of discrete black-box metaheuristics grows rapidly, the benchmarking of these algorithms is gaining importance. While new algorithms are often introduced for specific problem domains, researchers are also interested in which general problem characteristics are hard for which type of algorithm. The W-Model is a benchmark function for discrete black-box optimization, which allows for the easy, fast, and reproducible generation of problem instances exhibiting characteristics such as ruggedness, deceptiveness, epistasis, and neutrality in a tunable way. We conduct the first large-scale study with the W-Model in its fixed-length single-objective form, investigating 17 algorithm configurations (including Evolutionary Algorithms and local searches) and 8372 problem instances. We develop and apply a machine learning methodology to automatically discover several clusters of optimization process runtime behaviors as well as their causes, grounded in the algorithm and model parameters. Both a detailed statistical evaluation and our methodology confirm that the different model parameters allow us to generate problem instances of different hardness, and also show that the investigated algorithms struggle with different problem characteristics. With our methodology, we select a set of 19 diverse problem instances with which researchers can conduct a fast but still in-depth analysis of algorithm performance. The best-performing algorithms in our experiment were Evolutionary Algorithms applying Frequency Fitness Assignment, which turned out to be robust over a wide range of problem settings and solved more instances than the other tested algorithms.
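To illustrate how the W-Model injects a tunable characteristic such as neutrality, the following is a minimal sketch of a neutrality-style layer: blocks of mu genotype bits are collapsed by majority vote into one phenotype bit before a simple base objective (here OneMax) is evaluated, so many genotypes map to the same objective value. The function names, the ties-to-0 rule, and the OneMax base are assumptions of this sketch, not the paper's exact definitions.

```python
def neutrality_layer(bits, mu):
    """Collapse consecutive blocks of mu bits into one bit by majority vote.

    Sketch of a W-Model-style neutrality transformation: larger mu means
    more genotypes share one phenotype. Ties map to 0 here (an assumption
    of this sketch; the paper's exact tie rule may differ).
    """
    out = []
    # Drop any trailing bits that do not fill a complete block.
    for i in range(0, len(bits) - len(bits) % mu, mu):
        block = bits[i:i + mu]
        out.append(1 if sum(block) > mu / 2 else 0)
    return out


def onemax(bits):
    """Base objective used in this sketch: number of ones (maximized)."""
    return sum(bits)


# With mu = 2, this 8-bit genotype collapses to a 4-bit phenotype.
genotype = [1, 1, 0, 1, 1, 0, 0, 0]
phenotype = neutrality_layer(genotype, 2)  # [1, 0, 0, 0] under ties->0
print(onemax(phenotype))
```

Flipping one bit inside a block that does not change its majority leaves the objective value unchanged, which is exactly the neutral behavior the abstract describes as tunable via the model parameters.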
