Abstract

In the traditional particle swarm optimization (PSO) algorithm, each particle updates its velocity and position with a learning mechanism based on its personal historical best position and the best position found by the population. This learning mechanism is simple and easy to implement, but it suffers from potential problems such as being easily trapped in local optima and an insufficient balance between exploration and exploitation. Thus, a novel random learning PSO with an improved quasi-Newton exploitation mechanism (RQ-PSO) is proposed. First, to improve global search ability, a random learning mechanism is proposed through an analysis of PSO variants with different learning mechanisms. This random learning mechanism is then integrated into PSO (RL-PSO) to obtain strong global search ability and avoid falling into local optima. Finally, to maintain a better balance between exploration and exploitation, an improved quasi-Newton method with strong exploitation ability is incorporated into RL-PSO, yielding RQ-PSO. Experimental results on complex functions from the CEC-2013 and CEC-2017 test sets show that RQ-PSO outperforms state-of-the-art PSO variants.
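The classical update rule the abstract refers to can be sketched as follows. This is a minimal illustration of the standard PSO velocity and position update, not the paper's RQ-PSO method; the inertia weight `w` and acceleration coefficients `c1`, `c2` are common illustrative values, not parameters taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of the classical PSO update.

    Each particle's velocity combines its previous velocity (inertia),
    attraction toward its personal best position (cognitive term), and
    attraction toward the swarm's best position (social term).
    """
    new_positions, new_velocities = [], []
    for x, v, p in zip(positions, velocities, pbest):
        nv = [w * vi
              + c1 * random.random() * (pi - xi)   # cognitive term
              + c2 * random.random() * (gi - xi)   # social term
              for vi, xi, pi, gi in zip(v, x, p, gbest)]
        new_velocities.append(nv)
        new_positions.append([xi + vi for xi, vi in zip(x, nv)])
    return new_positions, new_velocities
```

Because both attraction terms pull every particle toward the same personal and global bests, the swarm can collapse prematurely around a local optimum, which is the weakness the proposed random learning mechanism is designed to address.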

