Abstract

Large-scale optimization problems (LSOPs) have become increasingly significant and challenging in the evolutionary computation (EC) community. This article proposes a superiority combination learning distributed particle swarm optimization (SCLDPSO) for LSOPs. In the algorithm design, a master–slave multi-subpopulation distributed model is adopted, which enables full communication and information exchange among the subpopulations and thereby enhances diversity. Moreover, a superiority combination learning (SCL) strategy is proposed, in which each inferior particle in a poorly performing subpopulation randomly selects two well-performing subpopulations, whose better particles serve as learning sources. In the learning process, each selected subpopulation generates a learning particle by merging dimensions drawn from different particles, thereby combining the strengths of all particles in that subpopulation. The inferior particle can improve itself significantly by learning from these two superiority-combination particles, leading to a successful search. Experimental results show that SCLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms on both the CEC2010 and CEC2013 large-scale optimization test suites, including the winner of the competition on large-scale optimization. In addition, extended experiments with the dimensionality increased to 2000 demonstrate the scalability of SCLDPSO. Finally, an application to large-scale portfolio optimization problems further illustrates its applicability.
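The SCL strategy described above can be made concrete with a short sketch. The code below is a minimal, hypothetical reading of the abstract, not the paper's exact update rule: `combination_particle` merges per-dimension donors from one well-performing subpopulation into a single learning particle, and `scl_update` lets an inferior particle learn from two such particles; the function names, the inertia weight `w`, and the acceleration coefficient `c` are assumptions for illustration.

```python
import numpy as np

def combination_particle(subpop):
    """Build a superiority-combination learning particle: each dimension is
    drawn from a randomly chosen particle of the well-performing subpopulation
    (a hypothetical reading of the SCL merging step)."""
    subpop = np.asarray(subpop)                # shape: (num_particles, dim)
    n, dim = subpop.shape
    donors = np.random.randint(n, size=dim)    # one donor particle per dimension
    return subpop[donors, np.arange(dim)]

def scl_update(worse, exemplar1, exemplar2, velocity, w=0.7, c=1.5):
    """Move an inferior particle toward two superiority-combination particles.
    The PSO-style coefficients w and c are assumed, not taken from the paper."""
    r1, r2 = np.random.rand(2, worse.size)
    velocity = (w * velocity
                + c * r1 * (exemplar1 - worse)
                + c * r2 * (exemplar2 - worse))
    return worse + velocity, velocity

# Usage sketch: an inferior particle learns from two well-performing
# subpopulations (toy data; dimensions far below LSOP scale).
rng = np.random.default_rng(0)
subpop_a = rng.standard_normal((10, 5))        # well-performing subpopulation 1
subpop_b = rng.standard_normal((10, 5))        # well-performing subpopulation 2
worse = rng.standard_normal(5)                 # inferior particle's position
vel = np.zeros(5)
exemplar1 = combination_particle(subpop_a)
exemplar2 = combination_particle(subpop_b)
worse, vel = scl_update(worse, exemplar1, exemplar2, vel)
```

Because each exemplar mixes dimensions from many particles, the inferior particle learns from a composite of the subpopulation's strengths rather than from any single leader, which is the intuition the abstract attributes to the SCL strategy.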
