Abstract

The grey wolf optimizer (GWO) is an efficient swarm intelligence algorithm for a wide range of optimization problems. However, GWO tends to become trapped in local optima when solving large-scale problems. The social hierarchy is one of the main characteristics of GWO affecting its search efficiency. Thus, an improved algorithm called hierarchy strengthened GWO (HSGWO) is proposed in this paper. First, the wolf pack is roughly divided into two categories: dominant wolves and omega wolves. Second, an enhanced elite learning strategy is applied to the dominant wolves to prevent misguidance by low-ranking wolves and to improve collective efficiency. Then, a hybrid GWO and differential evolution (DE) strategy is executed for the omega wolves to avoid falling into local optima. In addition, a new hybrid one-dimensional and total-dimensional selection strategy is designed for the omega wolves to balance exploration and exploitation during optimization. Finally, a perturbation operator is used to maintain the diversity of the population and further improve exploration. For a complete evaluation, the proposed HSGWO is first compared with six representative GWO variants on 50-dimensional problems from the CEC2014 benchmarks. The scalability of HSGWO is further tested by comparing it with eight state-of-the-art non-GWO algorithms on large-scale optimization problems with 100 decision variables. In addition, a feature selection problem is used to test the effectiveness of HSGWO on real-world applications. The experimental results demonstrate that the proposed algorithm outperforms the other algorithms in terms of solution quality and convergence rate in most of the experiments.
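For context on the baseline that HSGWO modifies, the following is a minimal sketch of the canonical GWO of Mirjalili et al. (not the proposed HSGWO itself): the three fittest wolves (alpha, beta, delta) form the social hierarchy the abstract refers to, and every wolf moves toward the average of three leader-guided positions. The sphere objective and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gwo(fitness, dim, bounds, n_wolves=20, max_iter=200, seed=0):
    """Canonical grey wolf optimizer (baseline GWO, not HSGWO)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))      # initial wolf positions
    for t in range(max_iter):
        fit = np.apply_along_axis(fitness, 1, X)
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / max_iter)              # a decreases linearly 2 -> 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1 = rng.random(X.shape)
            r2 = rng.random(X.shape)
            A = 2 * a * r1 - a                    # exploration/exploitation coefficient
            C = 2 * r2
            D = np.abs(C * leader - X)            # distance to the leader
            new_X += leader - A * D               # candidate guided by this leader
        X = np.clip(new_X / 3.0, lo, hi)          # average of the three guides
    fit = np.apply_along_axis(fitness, 1, X)
    best = X[np.argmin(fit)]
    return best, fitness(best)

# Illustrative run on a sphere function (an assumed toy objective).
sphere = lambda x: float(np.sum(x ** 2))
best, val = gwo(sphere, dim=10, bounds=(-100.0, 100.0))
```

The hierarchy appears only through the alpha/beta/delta guidance; HSGWO's contribution, per the abstract, is to treat dominant and omega wolves differently rather than updating all wolves with this single rule.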
