Abstract

The grey wolf optimizer (GWO) is a relatively new swarm-intelligence algorithm for solving numerical as well as real-world optimization problems. However, the paramount challenge in GWO is that it is prone to stagnation in local optima. The main goal of this paper is to improve the search ability of GWO by introducing a new learning strategy into the algorithm. This new operator, called refraction learning, is essentially an opposition-based learning strategy inspired by the principle of light refraction in physics. The proposed operator is applied to the current global optimum of the swarm in the GWO algorithm and helps the population jump out of local optima. Based on refraction learning, a novel variant of GWO called RL-GWO is proposed, and a theoretical proof of convergence is provided. We investigate the performance of RL-GWO using two sets of benchmark test functions: 23 widely used benchmark test functions and 30 test functions from the IEEE CEC 2014 suite. A non-parametric Wilcoxon test is performed to assess the impact of improving the global optimum in the algorithm. We conclude that RL-GWO is an efficient, effective, and reliable algorithm for solving function optimization problems.
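To make the operator concrete, below is a minimal sketch of a generic refraction-based opposition operator as commonly formulated in the opposition-based learning literature; the paper's exact operator, its scale factor `k`, and refraction-index ratio `eta` are assumptions here and may differ from the authors' formulation.

```python
import numpy as np

def refraction_opposite(x, lb, ub, k=1.0, eta=1.0):
    """Compute a refraction-based opposite of solution x in [lb, ub].

    Hypothetical sketch: by the Snell's-law analogy, the opposite point
    is taken about the midpoint of the search interval and scaled by
    k * eta (k: scale factor, eta: ratio of refraction indices).
    When k * eta == 1 this reduces to standard opposition-based
    learning: x' = lb + ub - x.
    """
    x = np.asarray(x, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    mid = (lb + ub) / 2.0
    return mid + mid / (k * eta) - x / (k * eta)

# Typical use in a GWO loop (sketch): generate the opposite of the
# current alpha (best) wolf and keep it if it improves the fitness.
def maybe_refract_best(alpha, fitness, lb, ub, k=1.0, eta=1.0):
    candidate = refraction_opposite(alpha, lb, ub, k, eta)
    candidate = np.clip(candidate, lb, ub)  # stay inside the bounds
    if fitness(candidate) < fitness(alpha):  # minimization assumed
        return candidate
    return alpha
```

With `k * eta = 1` the operator recovers plain opposition-based learning, which is why refraction learning can be viewed as a generalization: tuning the scale factor lets the opposite point land closer to or farther from the midpoint, giving the stagnated global best a chance to escape a local optimum.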
