Abstract
The gorilla troops optimizer (GTO) is a recently developed meta-heuristic algorithm inspired by the collective lifestyle and social intelligence of gorillas. Like other metaheuristics, the convergence accuracy and stability of GTO deteriorate as the optimization problems to be solved become more complex. To overcome these defects and achieve better performance, this paper proposes an improved gorilla troops optimizer (IGTO). First, Circle chaotic mapping is introduced to initialize the positions of the gorillas, which enhances population diversity and lays a good foundation for the global search. Then, to avoid becoming trapped in local optima, a lens opposition-based learning mechanism is adopted to expand the search range. In addition, a novel local-search algorithm, adaptive β-hill climbing, is hybridized with GTO to increase the precision of the final solutions. Owing to these three improvements, the exploration and exploitation capabilities of the basic GTO are greatly enhanced. The performance of the proposed algorithm is comprehensively evaluated and analyzed on 19 classical benchmark functions. The numerical and statistical results demonstrate that IGTO provides better solution quality, local-optimum avoidance, and robustness than the basic GTO and five other well-known algorithms. Moreover, the applicability of IGTO is further demonstrated by solving four engineering design problems and training a multilayer perceptron. The experimental results suggest that IGTO exhibits remarkably competitive performance and promising prospects in real-world tasks.
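To make the first improvement concrete, the following is a minimal sketch of population initialization with the Circle chaotic map. It assumes the commonly used map form x_{k+1} = mod(x_k + b − a/(2π)·sin(2πx_k), 1) with illustrative parameters a = 0.5 and b = 0.2; these values and the function name are assumptions for illustration, not the exact settings reported in the paper.

```python
import numpy as np

def circle_chaotic_init(pop_size, dim, lb, ub, a=0.5, b=0.2, seed=None):
    """Initialize a population with the Circle chaotic map (illustrative sketch).

    The map form and the parameters a, b are common choices in the
    chaotic-initialization literature, not values taken from the IGTO paper.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.random(dim)                      # random chaotic seeds in (0, 1)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        # iterate the Circle map once per individual, dimension-wise
        x = np.mod(x + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * x), 1.0)
        pop[i] = lb + x * (ub - lb)          # scale chaotic values into [lb, ub]
    return pop

# Example: 30 gorillas in a 10-dimensional search space on [-100, 100]
population = circle_chaotic_init(30, 10, -100, 100, seed=1)
```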
Highlights
The basic gorilla troops optimizer (GTO) and five other advanced meta-heuristic algorithms, namely GWO [65], the Whale Optimization Algorithm (WOA) [29], SSA [66], Harris Hawks Optimization (HHO) [33], and the Slime Mould Algorithm (SMA) [36], are employed as competitors to validate the improvements and superiority of the proposed algorithm in terms of solution accuracy, boxplots, convergence behavior, average computation time, and statistical results.
The adaptive β-hill climbing algorithm is hybridized with GTO to boost the quality of the final solutions.
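As an illustration of this hybridized local-search step, the sketch below implements a generic adaptive β-hill climbing refinement of a single solution. The shrinking bandwidth bw(t), the linearly growing mutation rate β(t), and the constants beta_min, beta_max, and k are one common adaptive schedule and are assumptions for illustration, not the exact schedules used in the paper.

```python
import numpy as np

def adaptive_beta_hill_climbing(x, fitness, lb, ub, max_iter=100,
                                beta_min=0.01, beta_max=0.1, k=5, seed=None):
    """Refine one solution with an adaptive beta-hill-climbing loop (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    lb = np.broadcast_to(np.asarray(lb, float), x.shape)
    ub = np.broadcast_to(np.asarray(ub, float), x.shape)
    best, best_fit = x.copy(), fitness(x)
    dim = x.size
    for t in range(1, max_iter + 1):
        bw = 1.0 - t ** (1.0 / k) / max_iter ** (1.0 / k)        # shrinking step size
        beta = beta_min + (beta_max - beta_min) * t / max_iter   # growing mutation rate
        cand = best.copy()
        j = rng.integers(dim)
        cand[j] += rng.uniform(-1.0, 1.0) * bw * (ub[j] - lb[j])  # N-operator: local step
        mask = rng.random(dim) < beta                             # beta-operator: random reset
        cand[mask] = rng.uniform(lb[mask], ub[mask])
        cand = np.clip(cand, lb, ub)
        cand_fit = fitness(cand)
        if cand_fit < best_fit:                                   # greedy acceptance
            best, best_fit = cand, cand_fit
    return best, best_fit

# Example usage on the sphere function (minimization)
refined, refined_fit = adaptive_beta_hill_climbing(
    np.full(10, 5.0), lambda v: float(np.sum(v ** 2)), -100, 100, seed=1)
```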
Summary
Optimization refers to the process of searching for the optimal solution to a particular problem under certain constraints, so as to maximize benefits, performance, and productivity [1–4]. Among the commonly recognized categories of meta-heuristics, the final one comprises algorithms inspired by human learning behavior, including the Search Group Algorithm (SGA) [39], the Soccer League Competition Algorithm (SLC) [40], and Teaching-Learning-Based Optimization (TLBO) [41]. Similar to other meta-heuristic algorithms, GTO still suffers from low optimization accuracy, premature convergence, and a propensity to fall into local optima when solving complex optimization problems [55]. These defects stem mainly from the poor quality of the initial population, the lack of a proper balance between exploration and exploitation, and the low likelihood of large spatial leaps during the iteration process.
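The lens opposition-based learning mechanism introduced in the abstract targets exactly this low likelihood of large spatial leaps, by reflecting each candidate to the opposite side of the search space. The following is a minimal sketch assuming the widely cited lens-imaging rule x* = (lb + ub)/2 + (lb + ub)/(2k) − x/k, which reduces to ordinary opposition-based learning when k = 1; the scaling factor k = 1.4 and the helper select_better are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def lens_opposition(pop, lb, ub, k=1.4):
    """Generate lens opposition-based candidates for a whole population (sketch)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    mid = (lb + ub) / 2.0
    opp = mid + mid / k - pop / k          # reflect each gorilla through the "lens"
    return np.clip(opp, lb, ub)            # keep candidates inside the search space

def select_better(pop, opp, fitness):
    """Greedy selection: keep whichever of x and its opposite has better fitness."""
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opp)
    keep = f_pop <= f_opp                  # minimization: smaller fitness wins
    return np.where(keep[:, None], pop, opp)
```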