Optimization of multi-modal functions is challenging even for evolutionary and swarm-based algorithms, as it requires efficient exploration to locate promising regions of the search space and effective exploitation to precisely find the global optimum. The Grey Wolf Optimizer (GWO) is a recently developed nature-inspired metaheuristic with relatively few tuning parameters. However, GWO and most of its variants may suffer from a lack of population diversity, premature convergence, and an inability to maintain a good balance between exploratory and exploitative behaviors. To address these limitations, this work proposes a new variant of GWO that incorporates memory, evolutionary operators, and a stochastic local search technique. It further integrates the Linear Population Size Reduction (LPSR) technique. The proposed algorithm is comprehensively tested on 23 numerical benchmark functions, high-dimensional benchmark functions, 13 engineering case studies, four data classification problems, and three function approximation problems. The benchmark functions are mostly taken from the CEC 2005 and CEC 2010 special sessions and include rotated and shifted functions. The engineering case studies are drawn from the CEC 2020 real-world non-convex constrained optimization problems. The performance of the proposed GWO is compared with popular metaheuristics, namely particle swarm optimization (PSO), the gravitational search algorithm (GSA), the salp swarm algorithm (SSA), differential evolution (DE), self-adaptive differential evolution (SADE), basic GWO, and three of its recently improved variants. Statistical analysis and Friedman tests have been conducted to thoroughly compare their performance. The results demonstrate that the proposed GWO outperforms the compared algorithms on the benchmark functions and engineering case studies tested.
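The abstract names Linear Population Size Reduction (LPSR), a schedule popularized by L-SHADE in which the population shrinks linearly from an initial size to a minimum size as function evaluations accumulate. A minimal sketch of that standard schedule follows; the function and parameter names are hypothetical, not taken from the paper:

```python
def lpsr_population_size(nfes, max_nfes, n_init, n_min):
    """Standard LPSR schedule (as in L-SHADE): the target population
    size decreases linearly from n_init (at 0 evaluations) down to
    n_min (at max_nfes evaluations).

    nfes     -- number of function evaluations consumed so far
    max_nfes -- total evaluation budget
    n_init   -- initial population size
    n_min    -- final (minimum) population size
    """
    # Linear interpolation between n_init and n_min, rounded to an
    # integer population size.
    return round(((n_min - n_init) / max_nfes) * nfes + n_init)
```

In practice, whenever the computed size drops below the current population size, the worst-ranked individuals are removed until the sizes match; how the proposed GWO variant applies this step is not specified in the abstract.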