Abstract

In recent years, swarm-based stochastic optimizers have achieved remarkable results in tackling real-life problems in engineering and data science. Among particle swarm optimization (PSO) variants, the comprehensive learning PSO (CLPSO) is a well-established evolutionary algorithm whose comprehensive learning strategy (CLS) effectively boosts the efficacy of the PSO. However, on unimodal functions the CLPSO converges too slowly to reach the optimum efficiently. In this paper, the elite-based dominance scheme of another well-established method, the grey wolf optimizer (GWO), is introduced into the CLPSO, and the resulting grey wolf locally enhanced comprehensive learning PSO (GCLPSO) is proposed. Thanks to the exploitative tendencies of the GWO, the new variant improves the local search capability of the CLPSO. It is compared with 15 representative and advanced algorithms on the IEEE CEC2017 benchmarks. Experimental results show that the improved algorithm outperforms the comparison competitors on four different kinds of functions. Moreover, the algorithm is successfully applied to feature selection and three constrained engineering design problems. Simulations show that the GCLPSO effectively handles constrained problems and solves problems encountered in actual production.
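
The abstract states that the elite-based dominance scheme of the GWO is embedded into the CLPSO, but the excerpt does not spell out the update rule. For reference, the following is a minimal sketch of the standard GWO position update guided by the three elite wolves (alpha, beta, delta); the function name `gwo_elite_step` and the way it would be wired into CLPSO are assumptions for illustration, not the paper's exact integration.

```python
import numpy as np

def gwo_elite_step(x, alpha, beta, delta, a, rng=np.random.default_rng()):
    """One standard GWO move for a single agent `x` (1-D array),
    pulled toward the three elite solutions alpha, beta, delta.
    `a` decreases linearly from 2 to 0 over the iterations."""
    candidates = []
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2.0 * a * r1 - a          # exploration/exploitation coefficient
        C = 2.0 * r2                  # random weight on the leader
        D = np.abs(C * leader - x)    # distance to the leader
        candidates.append(leader - A * D)
    # The new position is the average of the three elite-guided moves.
    return np.mean(candidates, axis=0)
```

In GCLPSO this kind of elite-guided move is what sharpens local search around the best particles; how often it is applied and to which particles is not specified in this excerpt.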

Highlights

  • Optimization problems are common in real life, and we need to find the best solution when tackling a specific problem

  • We reduce the manufacturing cost of the model by optimizing four variables: the inner radius (R), the head thickness (Th), the shell thickness (Ts), and the length of the cylindrical section excluding the heads (L). The design vector of the model is $\vec{x} = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]$ (a sketch of the classical cost model is given after this list)

  • This paper presents an improved algorithm named GCLPSO. This algorithm introduces the grey wolf optimizer (GWO) into the comprehensive learning PSO (CLPSO) to improve the local search capability of the CLPSO. The GCLPSO achieves a more stable balance between global search and local search, which boosts the ability to search for the optimal solution. The improved algorithm was compared with seven classical MAs and eight advanced metaheuristic algorithms on the CEC2017 benchmark functions
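
The second highlight lists the four pressure vessel design variables, but the cost model itself is truncated in this excerpt. The sketch below assumes the classical pressure vessel design formulation commonly used with metaheuristics (material, forming, and welding cost plus four inequality constraints); the constants are those of that standard formulation, not necessarily the paper's.

```python
import math

def pressure_vessel_cost(x):
    """x = [Ts, Th, R, L]: shell thickness, head thickness,
    inner radius, length of the cylindrical section (without heads).
    Returns (cost, constraint_violations) for the classical formulation."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L
            + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L
            + 19.84 * Ts**2 * R)
    # g(x) <= 0 means feasible in the classical statement of the problem.
    g = [
        -Ts + 0.0193 * R,                                   # shell thickness limit
        -Th + 0.00954 * R,                                  # head thickness limit
        -math.pi * R**2 * L - (4.0 / 3.0) * math.pi * R**3 + 1_296_000,  # volume requirement
        L - 240.0,                                          # length limit
    ]
    return cost, [max(0.0, gi) for gi in g]
```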


Summary

Introduction

Optimization problems are common in real life, and we need to find the best solution when tackling a specific problem. As problem complexity increases, traditional gradient-based methods struggle to optimize some types of problems [1, 2]. To deal with this issue, metaheuristic algorithms are widely used in practice. Liang et al. proposed the CLPSO algorithm [61] in 2006. It uses a new comprehensive learning strategy (CLS) in which the personal best positions (pbest) of the particles are used to update a particle's velocity. The exemplar vector $f_i = [f_i(1), f_i(2), \ldots, f_i(D)]$ defines the learning sample for particle $i$, and $pbest_{f_i(d)}^{d}$ denotes the value, in dimension $d$, of the pbest of the particle selected as the exemplar for that dimension. When a dimension of a particle needs a velocity update, a random number is generated; if it is greater than the learning probability $Pc$, that dimension learns from the particle's own pbest.
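
To make the comprehensive learning strategy described above concrete, here is a minimal sketch of how an exemplar vector $f_i$ can be built and used in the CLPSO velocity update. The learning probability Pc_i, inertia weight w, and acceleration constant c follow the published CLPSO description; the helper names (`build_exemplar`, `clpso_velocity`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def build_exemplar(i, pbest, fitness, Pc_i):
    """Choose, dimension by dimension, whose pbest particle i learns from.
    pbest: (N, D) personal best positions; fitness: (N,) pbest fitness (lower is better)."""
    N, D = pbest.shape
    f_i = np.full(D, i)                         # default: learn from own pbest
    for d in range(D):
        if rng.random() < Pc_i:                 # otherwise (rand >= Pc_i) keep own pbest
            a, b = rng.choice(N, size=2, replace=False)
            f_i[d] = a if fitness[a] < fitness[b] else b   # tournament between two pbests
    if np.all(f_i == i):                        # force at least one foreign dimension
        f_i[rng.integers(D)] = rng.choice([j for j in range(N) if j != i])
    return f_i

def clpso_velocity(v, x, pbest, f_i, w=0.7, c=1.49445):
    """CLPSO velocity update: each dimension follows the pbest named in f_i."""
    exemplar = pbest[f_i, np.arange(len(x))]    # pbest_{f_i(d)}^d for every dimension d
    return w * v + c * rng.random(len(x)) * (exemplar - x)
```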

