Abstract

The motivation for this article stems from an issue that developers of new nature-inspired algorithms are usually confronted with today: How to select a test benchmark such that it highlights the quality of the developed algorithm most fairly? In line with this, the CEC Competition on Real-Parameter Single-Objective Optimization benchmarks, issued several times over the last decade, serve as a testbed for evaluating the collection of nature-inspired algorithms selected in our study. This article addresses two research questions: (1) How does the selected benchmark affect the ranking of a particular algorithm, and (2) Is it possible to find a best algorithm capable of outperforming all the others on all the selected benchmarks? Ten outstanding algorithms (also winners of particular competitions) from different periods of the last decade were collected and applied to the benchmarks issued during the same time period. A comparative analysis showed that there is a strong correlation between the rankings of the algorithms obtained on the different benchmarks, although some deviations arose in ranking the best algorithms. The possible reasons for these deviations are exposed and commented on.
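
As a hint of what such a cross-benchmark comparison involves, the sketch below ranks a set of algorithms on two benchmark suites and measures how strongly the two rankings agree. The error values, the algorithm list, and the choice of Kendall's tau as the agreement measure are illustrative assumptions for this example, not the article's data or its exact methodology.

```python
# Illustrative sketch of a cross-benchmark ranking comparison.
# The error values below are placeholders, NOT the article's results.
import numpy as np
from scipy.stats import rankdata, kendalltau

algorithms = ["DE", "jDE", "SHADE", "L-SHADE", "jSO"]

# mean best-fitness errors per algorithm (rows) and test function (columns),
# one matrix per benchmark suite -- placeholder numbers for illustration
errors = {
    "CEC-2014": np.array([[3.2, 1.5, 9.1],
                          [2.1, 1.2, 7.4],
                          [0.9, 0.8, 5.0],
                          [0.5, 0.6, 4.2],
                          [0.4, 0.7, 4.5]]),
    "CEC-2017": np.array([[4.0, 2.2, 8.8],
                          [2.5, 1.9, 7.0],
                          [1.1, 1.0, 4.8],
                          [0.7, 0.5, 4.0],
                          [0.6, 0.9, 4.1]]),
}

# Friedman-style average rank of each algorithm on each benchmark
# (rank 1 = smallest error on a function, averaged over all functions)
avg_ranks = {name: rankdata(mat, axis=0).mean(axis=1)
             for name, mat in errors.items()}

for name, ranks in avg_ranks.items():
    order = np.argsort(ranks)
    print(name, "->", [algorithms[i] for i in order])

# Kendall's tau between the two benchmark-wise rankings: values near 1
# mean that swapping one benchmark for the other barely reshuffles the ranking
tau, p = kendalltau(avg_ranks["CEC-2014"], avg_ranks["CEC-2017"])
print(f"Kendall tau = {tau:.2f} (p = {p:.3f})")
```

A tau close to 1 across all pairs of suites would support the first research question's "strong correlation" finding, while low values on particular pairs would flag the kind of deviations the abstract mentions.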

Highlights

  • The purpose of the article is to answer the question of how the selection of the benchmark suite affects the assessment of the quality of newly developed nature-inspired algorithms

  • The purpose of our experimental work was to test whether the following two hypotheses hold: (1) Selecting any of the Congress on Evolutionary Computation (CEC) Competition on Real-Parameter Single-Objective Optimization benchmarks issued in the last decade does not significantly influence the assessment of the quality of a newly developed algorithm, and (2) It is possible to find an algorithm achieving the best results on all observed benchmarks

  • Three sources helped us implement the algorithms in Table 1: (1) The DE, jDE, and Self-adaptive DE (SaDE) algorithms are based on the original implementation of DE in C/C++ taken from the official University of California, Berkeley web site [30], (2) The Artificial Bee Colony (ABC) algorithm is based on the original C/C++ implementation found on Karaboga's web site [31], while (3) The implementations of the other algorithms appearing in CEC competitions, i.e., Search Equation-based Artificial Bee Colony (SSEABC), Success-History based Adaptive DE (SHADE), LSHADE, iL-SHADE, jSO, and LSHADE_RSP, were downloaded from Prof. Suganthan's web site (a minimal sketch of the DE baseline that the DE-family variants extend follows this list)
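
For orientation, here is a minimal Python sketch of the classic DE/rand/1/bin scheme that the DE-family algorithms above (jDE, SaDE, SHADE, and their successors) extend with self-adaptation and history mechanisms. This is not the authors' C/C++ code; the control parameters (F = 0.5, CR = 0.9), the population size, and the sphere objective are assumed values chosen only for illustration.

```python
# Minimal sketch of DE/rand/1/bin -- an illustration of the baseline,
# NOT the C/C++ implementations referenced in the article.
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=50, F=0.5, CR=0.9,
                  max_gens=200, rng=np.random.default_rng(1)):
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gens):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # mutation
            mutant = np.clip(mutant, lo, hi)             # bound repair
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]

# usage: minimize the sphere function in 10 dimensions
sphere = lambda x: float(np.sum(x * x))
x_best, f_best = de_rand_1_bin(sphere, (np.full(10, -5.0), np.full(10, 5.0)))
print(f"best fitness: {f_best:.3e}")
```

The self-adaptive variants in Table 1 keep this skeleton but replace the fixed F and CR with per-individual or history-based values, which is the main axis along which the competition winners differ.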


Introduction

The purpose of this article is to answer the question of how the selection of the benchmark suite affects the assessment of the quality of newly developed nature-inspired algorithms. Observing principles found in nature, like tracing ant trails, watching termites build their nests (mounds), inspecting wolves and their hunting habits in deep forests, investigating the flight paths of birds, and even admiring the small lightning bugs, called fireflies, on young summer nights, has inspired the development of nature-inspired algorithms. All of these natural processes can be treated as optimization processes, while their mathematical formulation presents a basis for building optimization algorithms.

