Abstract

To address the premature convergence of particle swarm optimization (PSO), which is often caused by a loss of swarm diversity, an improved cooperative PSO (ICPSO) is proposed. The method dynamically combines each particle's personal best, the group best, and the global best, adjusts the proportion of shared information according to the current stage of the optimization, and fuses these reference sources to obtain better global and local search performance. In addition, to improve the diversity of the algorithm, a dynamic adjustment method based on a grouping coefficient is proposed for controlling the convergence rate. This gives the algorithm a more appropriate convergence rate while improving convergence precision and overall performance. Finally, the algorithm is applied to the optimization of a neural network. The convergence condition and convergence rate of the algorithm are assessed by theoretical analysis and simulation experiments. The results show that ICPSO offers better diversity and convergence-rate adjustment than related algorithms. In the neural network experiments, the ICPSO-BP network achieves the fastest training and the highest optimization precision, reaching best and average classification accuracies of 98.5% and 96.3% within 20 iterations on Iris, and 98.7% and 95.1% on Wine, and it attains the best average number of iterations on five of the six test problems.
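The abstract does not give the exact ICPSO update rule or the form of the grouping coefficient, so the following Python sketch is only an illustration of the general idea: a velocity update that blends the personal best, the particle's group best, and the global best, with a stage-dependent weight shifting influence between the group and global terms. All names (icpso_velocity, lam, c1-c3) and the linear weight schedule are assumptions, not the paper's formulation.

import numpy as np

def icpso_velocity(v, x, pbest, gbest_group, gbest_global,
                   iteration, max_iter, w=0.7, c1=1.5, c2=1.5, c3=1.5,
                   rng=None):
    # One velocity update for a single particle (illustrative sketch).
    # Early in the search the group best dominates the social term;
    # late in the search the global best takes over, which is one plausible
    # way to realize the stage-dependent information sharing described above.
    rng = np.random.default_rng() if rng is None else rng
    stage = iteration / max_iter          # 0 at the start, 1 at the end
    lam = 1.0 - stage                     # assumed schedule for the grouping weight
    r1, r2, r3 = rng.random(3)            # stochastic factors, as in standard PSO
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * lam * (gbest_group - x)
            + c3 * r3 * (1.0 - lam) * (gbest_global - x))

In a full optimizer this update would be applied to every particle in every subgroup per iteration, followed by a position update x + v and the usual refresh of the personal, group, and global bests.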
