Abstract

Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO often suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. We then develop a PSO with double learning patterns (PSO-DLP), which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore the search space to preserve swarm diversity, while the particles in the slave swarm learn from the global best particle to refine a promising solution. An interaction mechanism is enabled when the evolutionary states of the two swarms interact. This mechanism helps the slave swarm escape local optima and improves the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP achieves promising performance and outperforms eight PSO variants.

Highlights

  • Particle Swarm Optimization (PSO) [1, 2], first proposed by Kennedy and Eberhart in 1995, was inspired by the simulation of simplified social behaviors such as fish schooling and bird flocking

  • To investigate their effects, we studied the performance of PSO-W, PSO with UEL (PSO-UEL), PSO with enhanced exploitation learning (PSO-EEL), and the complete PSO with double learning patterns (PSO-DLP)

  • We develop a Particle Swarm Optimization with double learning patterns (PSO-DLP), which uses two swarms with different search abilities and an interaction mechanism between them to control the exploration and exploitation searches

Introduction

Particle Swarm Optimization (PSO) [1, 2], first proposed by Kennedy and Eberhart in 1995, was inspired by the simulation of simplified social behaviors such as fish schooling and bird flocking. PSO has been successfully applied to optimization problems in many fields [4,5,6,7]. In the basic PSO [1], each particle in the swarm learns from its personal best position (pbest) and the global best position (gbest). Since gbest is the only information shared by the whole swarm, all particles converge toward the same destination and diversity is lost quickly. This learning mechanism gives the basic PSO a fast convergence rate, but it leads to premature convergence on multimodal optimization problems. To overcome this problem, researchers have proposed many strategies to improve it.
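The pbest/gbest learning rule described above can be sketched as follows. This is a minimal illustration, not the paper's PSO-DLP: it uses the common inertia-weight form of the velocity update, and the parameter values (w, c1, c2) are illustrative defaults rather than settings taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One update step of the basic inertia-weight PSO.

    positions, velocities, pbest: one list-of-coordinates per particle.
    gbest: the best position found by the whole swarm so far.
    Each particle is pulled toward its own pbest and the shared gbest.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()  # stochastic learning weights
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull toward pbest
                    + c2 * r2 * (gbest[d] - x[d]))     # social pull toward shared gbest
            x[d] += v[d]
    return positions, velocities
```

Because every particle's social term points at the same gbest, repeated updates draw the swarm toward one region, which is exactly the diversity loss the paragraph above describes.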

