Abstract

The field of continuous optimisation has witnessed an explosion of so-called new or novel metaheuristic algorithms. Though not all of these algorithms are as efficient as proclaimed by their inventors, a few of them have proved to be very efficient and thus have become popular tools for solving complex optimisation problems. Therefore, there is a need for a systematic analysis approach to fairly evaluate and compare the results of some of these optimisation algorithms. In this paper, a set of well-known mathematical benchmark functions is compiled to provide an easily accessible collection of standard benchmark test problems for continuous global optimisation. This set of test problems is used to investigate the computational capabilities and the microscopic behaviour of twelve different metaheuristic algorithms. The required number of function evaluations for reaching the best solution and the run-time complexity of the algorithms are compared. Furthermore, statistical tests are conducted to validate the concluding remarks.
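To illustrate the kind of comparison the abstract describes, the sketch below shows how a standard benchmark function can be wrapped with an evaluation counter so that the number of function evaluations needed to reach a given solution quality can be recorded. The choice of the Rastrigin function, the wrapper class, and the toy random-search loop are illustrative assumptions; the paper's own benchmark set, algorithms, and experimental protocol may differ.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin benchmark (assumed example): global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

class CountingObjective:
    """Wraps a test function and counts evaluations, the metric the paper compares."""
    def __init__(self, func):
        self.func = func
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return self.func(x)

# Toy usage: random search on the 5-D Rastrigin function over [-5.12, 5.12]^5.
rng = np.random.default_rng(0)
objective = CountingObjective(rastrigin)
best = min(objective(rng.uniform(-5.12, 5.12, size=5)) for _ in range(1000))
print(f"best value: {best:.4f} after {objective.evaluations} evaluations")
```

Any metaheuristic under test would replace the random-search loop while sharing the same counted objective, which keeps the evaluation budget comparable across algorithms.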
