Abstract

The grey wolf optimization (GWO) algorithm is widely used in global optimization applications. In this paper, a dynamic opposite learning-assisted grey wolf optimizer (DOLGWO) is proposed to improve its search ability. A dynamic opposite learning (DOL) strategy is adopted, which operates on an asymmetric search space and adjusts it with a random opposite point, thereby enhancing both the exploitation and exploration capabilities. To validate the performance of the DOLGWO algorithm, 23 benchmark functions from CEC2014 were adopted in the numerical experiments. A total of 10 popular algorithms, namely GWO, TLBO, PIO, Jaya, CFPSO, CFWPSO, ETLBO, CTLBO, NTLBO, and DOLJaya, were compared with the DOLGWO algorithm. The results indicate that the new model has strong robustness and adaptability and converges reliably to the global optimum, which demonstrates that the DOL strategy greatly improves the performance of the original GWO algorithm.
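
To make the DOL jump described above concrete, the sketch below generates dynamic opposite candidates for a wolf population. It assumes the commonly used DOL formulation in which the opposite point is X_o = a + b - X and the dynamic opposite point is X_do = X + w * r1 * (r2 * X_o - X); the weight w, the bounds, and the clipping repair used here are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of a dynamic opposite learning (DOL) jump, under the
# assumptions stated above; not the authors' exact implementation.
import numpy as np

def dol_jump(population, lower, upper, w=3.0, rng=None):
    """Generate dynamic-opposite candidates and keep them inside the bounds."""
    rng = np.random.default_rng() if rng is None else rng
    pop = np.asarray(population, dtype=float)
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    opposite = lower + upper - pop                   # static opposite point X_o
    dynamic = pop + w * r1 * (r2 * opposite - pop)   # dynamic opposite point X_do
    return np.clip(dynamic, lower, upper)            # repair out-of-bound values

# Usage: the candidates would be evaluated and greedily merged with the wolf pack
# before the standard GWO position update.
pack = np.random.uniform(-100, 100, size=(30, 10))   # 30 wolves, 10 dimensions
candidates = dol_jump(pack, lower=-100, upper=100)
```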
