Abstract

The grey wolf optimizer (GWO) algorithm is a recently developed, novel, population‐based optimization technique that is inspired by the hunting mechanism of grey wolves. The GWO algorithm has some distinct advantages, such as few algorithm parameters, strong global optimization ability, and ease of implementation on a computer. However, the paramount challenge is that there are some cases where the GWO is prone to stagnation in local optima. This drawback of the GWO algorithm may be attributed to an insufficiency in its position‐updated equation, which disregards the positional interaction information about the three best grey wolves (i.e., the three leaders). This paper proposes an improved version of the GWO algorithm that is based on a dynamically dimensioned search, spiral walking predation technique, and positional interaction information (referred to as the DGWO). In addition, a nonlinear control parameter strategy, i.e., the control parameter that is nonlinearly increased with an increase in iterations, is designed to balance the exploration and exploitation of the GWO algorithm. The experimental results for 23 general benchmark functions and 3 well‐known engineering optimization design applications validate the effectiveness and feasibility of the proposed DGWO algorithm. The comparison results for the 23 benchmark functions show that the proposed DGWO algorithm performs significantly better than the GWO and its improved variant for most benchmarks. The DGWO provides the highest solution precision, strongest robustness, and fastest convergence rate among the compared algorithms in almost all cases.
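For context, here is a minimal NumPy sketch of the canonical GWO position-updated equation; the function name, signature, and array layout are illustrative assumptions rather than code from the paper. It makes the critique in the abstract concrete: the three leader-guided positions are simply averaged, so no positional interaction information among the alpha, beta, and delta wolves enters the update.

```python
import numpy as np

def standard_gwo_update(X, X_alpha, X_beta, X_delta, a, rng=None):
    """Canonical GWO position update (illustrative sketch).

    X                        : current wolf position, shape (dim,)
    X_alpha, X_beta, X_delta : positions of the three leading wolves
    a                        : control parameter, linearly decreased from 2 to 0 in standard GWO
    """
    if rng is None:
        rng = np.random.default_rng()
    leader_guided = []
    for leader in (X_alpha, X_beta, X_delta):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A = 2.0 * a * r1 - a          # coefficient vector A
        C = 2.0 * r2                  # coefficient vector C
        D = np.abs(C * leader - X)    # estimated distance to this leader
        leader_guided.append(leader - A * D)
    # Plain average of the three leader-guided positions: the leaders do not
    # interact with one another, which is the insufficiency the DGWO targets.
    return np.mean(leader_guided, axis=0)
```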

Highlights

  • In terms of exploration ability, population-based heuristic algorithms are superior to single-solution-based heuristic algorithms. The genetic algorithm (GA) has been used to address the characterization of hyperelastic materials [12, 13]

  • To validate the performance of the proposed DGWO algorithm, 23 benchmark problems of various complexities and sizes are collected from studies [21, 23, 43]. The characteristics of the selected test functions are summarized in Table 1, where fmin denotes the global optimal value

  • To better understand the behavior observed on function f18, we need to know that the nonlinear control parameter strategy was designed for the modified position-updated equation and is not suitable for independent use in the search process; a generic, illustrative sketch of such a nonlinear schedule follows this list. Thus, the performance of the DGWO significantly outperformed that of the DGWO-2
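The exact form of the DGWO's nonlinear control parameter is not reproduced here; the following is only a generic, hypothetical sketch of a schedule that increases nonlinearly with the iteration count, as the abstract describes. The parameter names and the power-law shape are illustrative assumptions, not the paper's formula.

```python
def nonlinear_control_parameter(t, max_iter, c_min=0.0, c_max=2.0, k=2.0):
    """Hypothetical nonlinear schedule rising from c_min to c_max as t approaches max_iter.

    With k > 1 the value stays small through most early iterations (favoring
    exploration) and rises quickly near the end; this is an illustrative
    family of curves, not the paper's exact strategy.
    """
    return c_min + (c_max - c_min) * (t / max_iter) ** k
```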


Summary

Overview of GWO and DDS

Studies have shown that, for the standard GWO algorithm, the linear change of the control parameter a⃗ and the design of the position-updated equation cause some drawbacks, such as premature convergence of the algorithm and powerlessness when solving multimodal problems [12, 27, 42]. The key idea that lets the DDS algorithm transit from a global search to a local search is to dynamically and probabilistically reduce the number of dimensions to be perturbed in the neighborhood of the current best solution [11, 43]. If the perturbed candidate X⃗_new(t) improves the objective value, it replaces the current best solution; otherwise, the current best solution X⃗(t) is reserved for the next iteration. The pseudocode description of the DDS algorithm is presented in Algorithm 2 [11]
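Since Algorithm 2 is referenced but not reproduced here, the following is a compact Python sketch of the DDS step described above; the function name, NumPy-based signature, and the reflection-then-clip bound handling are illustrative assumptions rather than the paper's exact pseudocode.

```python
import numpy as np

def dds(objective, lower, upper, x0, max_iter=1000, r=0.2, rng=None):
    """Dynamically dimensioned search (sketch): a greedy single-solution search
    that perturbs a probabilistically shrinking subset of dimensions."""
    if rng is None:
        rng = np.random.default_rng()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = np.asarray(x0, float).copy()
    f_best = objective(x_best)
    dim = x_best.size

    for t in range(1, max_iter + 1):
        # Probability of perturbing each dimension decays with the iteration count,
        # which shifts the search from global exploration to local refinement.
        p = 1.0 - np.log(t) / np.log(max_iter)
        mask = rng.random(dim) < p
        if not mask.any():                   # always perturb at least one dimension
            mask[rng.integers(dim)] = True

        x_new = x_best.copy()
        step = r * (upper - lower) * rng.standard_normal(dim)
        x_new[mask] += step[mask]

        # Reflect bound violations back inside the box, then clip as a fallback.
        x_new = np.where(x_new < lower, 2 * lower - x_new, x_new)
        x_new = np.where(x_new > upper, 2 * upper - x_new, x_new)
        x_new = np.clip(x_new, lower, upper)

        f_new = objective(x_new)
        if f_new <= f_best:                  # greedy acceptance of an improvement;
            x_best, f_best = x_new, f_new    # otherwise the current best is kept
    return x_best, f_best
```

For example, `dds(lambda x: np.sum(x**2), lower=-5*np.ones(10), upper=5*np.ones(10), x0=3*np.ones(10))` drives a 10-dimensional sphere function toward the origin.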

Proposed Algorithm
Results and Discussion
Methods
Conclusions
