Abstract

Various benchmark sets have already been proposed to facilitate comparison between metaheuristics and Evolutionary Algorithms. In such competitions, algorithms are typically allowed to run either until the allotted number of function calls is exhausted (in which case the quality of the solutions found is compared) or until a required objective function value is reached (in which case the speed of reaching the required solution is compared). Over the last 20 years, several problem sets have been defined using the first approach. In this study, we test 73 optimization algorithms proposed between the 1960s and 2022 on nine competitions based on four sets of problems (CEC 2011, CEC 2014, CEC 2017, and CEC 2020) with different dimensionalities. We test the original versions of the 73 algorithms "as they are", with the control parameters proposed by the authors of each method. The most recent benchmark set, CEC 2020, includes fewer problems and allows many more function calls than the former sets. As a result, one group of algorithms performs best on the older benchmark sets and a different group on the more recent (CEC 2020) set. Almost all algorithms that perform best on the CEC 2020 set achieve moderate-to-poor performance on the older sets, including the real-world problems from CEC 2011. Algorithms that perform best on the older sets are more flexible than those that perform best on the CEC 2020 benchmark. The choice of the benchmark may have a crucial impact on the final ranking of algorithms. The lack of tuning may affect the results obtained in this study; hence, it is highly recommended to repeat a similar large-scale comparison with the control parameters of each algorithm tuned, preferably by different methods, separately for each benchmark set.
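To illustrate the two competition protocols the abstract contrasts, the following is a minimal sketch, not taken from the paper: a fixed-budget run (compare solution quality after a set number of function calls) versus a fixed-target run (compare how many calls are needed to reach a required objective value). The objective, the placeholder random-search step, and all parameter names are illustrative assumptions, not part of any CEC benchmark's actual interface.

```python
import math
import random


def sphere(x):
    """Toy objective: sum of squares, with minimum 0 at the origin."""
    return sum(xi * xi for xi in x)


def random_candidate(dim, bounds=(-5.0, 5.0)):
    """Hypothetical stand-in for one step of an optimization algorithm."""
    lo, hi = bounds
    return [random.uniform(lo, hi) for _ in range(dim)]


def run_fixed_budget(objective, dim, max_evals):
    """Fixed-budget protocol: spend all function calls, report the best value found."""
    best = math.inf
    for _ in range(max_evals):
        best = min(best, objective(random_candidate(dim)))
    return best  # algorithms are then ranked by solution quality


def run_fixed_target(objective, dim, target_value, max_evals):
    """Fixed-target protocol: report how many function calls were needed to reach the target."""
    for evals in range(1, max_evals + 1):
        if objective(random_candidate(dim)) <= target_value:
            return evals  # algorithms are then ranked by speed
    return None  # target not reached within the budget


if __name__ == "__main__":
    print("fixed budget, best value:", run_fixed_budget(sphere, 10, 5000))
    print("fixed target, evals used:", run_fixed_target(sphere, 10, 10.0, 5000))
```

The benchmark sets discussed in the abstract (CEC 2011, 2014, 2017, 2020) follow the first, fixed-budget scheme, which is why the allowed number of function calls strongly shapes the resulting rankings.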
