Abstract

To improve the performance of standard particle swarm optimization (PSO), which suffers from premature convergence and slow convergence speed, many PSO variants introduce numerous stochastic or undirected strategies to address the convergence problem. However, mutual learning between elite particles is overlooked, even though it could accelerate convergence and help prevent premature convergence. In this paper, we introduce DSLPSO, which integrates three novel strategies, namely tabu detecting, shrinking, and local learning, into PSO to overcome the aforementioned shortcomings. In DSLPSO, the search space of each dimension is divided into a number of equal subregions. The tabu detecting strategy, which provides good ergodicity over the search space, helps the global historical best particle detect a more suitable subregion and thus jump out of a local optimum. The shrinking strategy enables DSLPSO to perform optimization in a smaller search space and achieve a higher convergence speed. In the local learning strategy, a differential between two elite particles is used to increase solution accuracy. The experimental results show that DSLPSO outperforms several competing PSO variants on most of the tested functions, offering faster convergence, higher solution accuracy, and stronger reliability.
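The abstract does not give DSLPSO's update equations, so the minimal sketch below only shows the canonical PSO velocity/position update that DSLPSO builds on, together with a hypothetical helper illustrating how a single dimension's range could be split into equal subregions as the abstract describes. The parameter values (w, c1, c2) and the helper name `split_dimension` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One canonical PSO velocity/position update (the baseline DSLPSO extends).

    x, v        : current positions and velocities, shape (n_particles, n_dims)
    pbest       : each particle's personal best position, same shape as x
    gbest       : global historical best position, shape (n_dims,)
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def split_dimension(lower, upper, n_subregions):
    """Hypothetical helper: divide one dimension's range [lower, upper]
    into equal subregions, as the abstract's subregion partition suggests."""
    edges = np.linspace(lower, upper, n_subregions + 1)
    return list(zip(edges[:-1], edges[1:]))
```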
