Abstract
Since particle swarm optimization (PSO) is a stochastic optimization algorithm, it is more appropriate to study its probabilistic convergence. In this study, we analyze its convergence with probability 1 using the theory of probabilistic metric spaces. First, we assume that the personal best position of each particle and the global best position of the swarm are updated during the run, but we do not require them to be independent of the particle's position. This assumption is more realistic and holds for all PSO variants. We then derive a stochastic recurrence relation for the state of a particle under this assumption. Finally, we obtain a sufficient condition under which the stochastic PSO algorithm is τ-convergent with probability 1. In addition, we analyze how parameters in the unstable range affect individual iterations. Although such parameters cannot guarantee long-term convergence, they strongly influence the exploration ability of PSO, so understanding their effect is crucial. Based on this analysis, we propose a novel strategy for balancing the exploitation and exploration abilities of the PSO algorithm.
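For context, the abstract does not reproduce the paper's recurrence relation itself; the following is a minimal sketch of the canonical PSO update, from which such state recurrences are typically built (the symbols ω, c₁, c₂, r₁, r₂ denote the usual inertia weight, acceleration coefficients, and uniform random numbers; this notation is assumed here, not taken from the paper):

\[
\begin{aligned}
v_i(t+1) &= \omega\, v_i(t) + c_1 r_1 \bigl(p_i(t) - x_i(t)\bigr) + c_2 r_2 \bigl(g(t) - x_i(t)\bigr),\\
x_i(t+1) &= x_i(t) + v_i(t+1),
\end{aligned}
\]

where \(x_i\) and \(v_i\) are the position and velocity of particle \(i\), \(p_i\) is its personal best, and \(g\) is the global best of the swarm. The stochasticity analyzed in the paper enters through the random factors \(r_1\) and \(r_2\) and, under the stated assumption, through the dependence of \(p_i\) and \(g\) on the particle positions.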