Abstract

High-dimensional problems are ubiquitous in many fields, yet remain challenging to solve. To tackle such problems effectively and efficiently, this article proposes a simple yet efficient stochastic dominant learning swarm optimizer. In particular, this optimizer not only properly balances swarm diversity and convergence speed, but also consumes as little computing time and space as possible to locate the optima. In this optimizer, a particle is updated only when the two exemplars randomly selected for it from the current swarm both dominate it. In this way, each particle has an implicit probability of entering the next generation directly, making it possible to maintain high swarm diversity. Since each updated particle learns only from its dominators, good convergence is likely to be achieved. To alleviate the sensitivity of this optimizer to the newly introduced parameters, an adaptive parameter adjustment strategy is further designed based on the evolutionary information of particles at the individual level. Finally, extensive experiments on two high-dimensional benchmark sets substantiate that the devised optimizer achieves competitive or even better performance in terms of solution quality, convergence speed, scalability, and computational cost, compared with several state-of-the-art methods. In particular, experimental results show that the proposed optimizer performs excellently on partially separable problems, especially partially separable multimodal problems, which are very common in real-world applications. In addition, the application to feature selection problems further demonstrates the effectiveness of this optimizer in tackling real-world problems.
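To illustrate the update rule described above, the following is a minimal sketch of one generation of a stochastic dominant-learning update in Python. It only encodes what the abstract states: each particle draws two random exemplars and is updated only if both dominate it, otherwise it passes to the next generation unchanged. The specific velocity formula, the parameter `phi`, and the ordering of the two dominators are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sdlso_step(positions, velocities, fitness, phi=0.3, rng=None):
    """One generation of a stochastic dominant-learning update (illustrative sketch).

    positions, velocities: (N, D) arrays; fitness: (N,) array (minimization assumed).
    phi is a hypothetical control parameter weighting the second dominator.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    new_pos, new_vel = positions.copy(), velocities.copy()

    for i in range(n):
        # Randomly pick two distinct exemplars from the rest of the swarm.
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        # Update particle i only if BOTH exemplars dominate it (have better fitness);
        # otherwise it enters the next generation unchanged, preserving diversity.
        if fitness[a] < fitness[i] and fitness[b] < fitness[i]:
            # Assumed convention: the better dominator leads, the other assists.
            lead, assist = (a, b) if fitness[a] < fitness[b] else (b, a)
            r1, r2, r3 = rng.random(d), rng.random(d), rng.random(d)
            new_vel[i] = (r1 * velocities[i]
                          + r2 * (positions[lead] - positions[i])
                          + phi * r3 * (positions[assist] - positions[i]))
            new_pos[i] = positions[i] + new_vel[i]
    return new_pos, new_vel
```

Because unimproved particles are copied forward at no extra cost, a step like this needs only O(N·D) time and memory per generation, which is consistent with the low computational cost claimed in the abstract.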
