Abstract

Large-scale optimization has become a significant and challenging research topic in the evolutionary computation (EC) community. Although many improved EC algorithms have been proposed for large-scale optimization, slow convergence in the huge search space and entrapment in local optima among massive suboptima remain challenges. To address these two issues, this article proposes an adaptive granularity learning distributed particle swarm optimization (AGLDPSO) with the help of machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). In AGLDPSO, a master-slave multisubpopulation distributed model is adopted, where the entire population is divided into multiple subpopulations and these subpopulations are co-evolved. Compared with other large-scale optimization algorithms that use single-population evolution or a centralized mechanism, the multisubpopulation distributed co-evolution mechanism fully exchanges evolutionary information among different subpopulations to further enhance population diversity. Furthermore, we propose an adaptive granularity learning strategy (AGLS) based on LSH and LR. The AGLS helps determine an appropriate subpopulation size to control the learning granularity of the distributed subpopulations in different evolutionary states, balancing the exploration ability for escaping from massive suboptima and the exploitation ability for converging in the huge search space. The experimental results show that AGLDPSO performs better than, or at least comparably with, some other state-of-the-art large-scale optimization algorithms, even the winner of the competition on large-scale optimization, on all 35 benchmark functions from both the IEEE Congress on Evolutionary Computation (IEEE CEC2010) and IEEE CEC2013 large-scale optimization test suites.
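As background for the PSO-based algorithm described above, the following is a minimal sketch of the canonical global-best PSO velocity and position update that AGLDPSO builds on. It is not the authors' distributed variant: the inertia weight, acceleration coefficients, sphere objective, and population settings here are illustrative assumptions only.

```python
import numpy as np

def sphere(x):
    """Illustrative test objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso(obj, dim=10, n_particles=20, iters=200, seed=0):
    """Canonical global-best PSO; returns (best position, best value)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights (assumed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                          # personal best positions
    pbest_val = np.array([obj(p) for p in pos])
    g = int(np.argmin(pbest_val))               # index of global best
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val             # update personal bests
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        g = int(np.argmin(pbest_val))           # update global best
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val
```

In a multisubpopulation scheme such as the one the abstract describes, each subpopulation would run an update of this kind on its own members and periodically exchange best-so-far information with the others.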

Highlights

  • Evolutionary computation (EC) algorithms, including evolutionary algorithms (EAs) and swarm intelligence algorithms (SIs) [1]–[9], such as genetic algorithm (GA) [10], [11]; differential evolution (DE) [12]–[15]; particle swarm optimization (PSO) [16]–[18]; and ant colony optimization (ACO) [19]–[21], have been widely studied and applied in many real-world optimization problems

  • To further improve the searching ability and achieve an adaptive algorithm, this article develops a novel adaptive granularity learning distributed PSO (AGLDPSO) with the help of machine-learning (ML) techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR)

  • We compare the results obtained by adaptive granularity learning distributed particle swarm optimization (AGLDPSO) with six PSO-based large-scale optimization algorithms, including CCPSO2 [30], SL-PSO [36], CSO [37], dynamic segment-based predominant learning swarm optimizer (DSPLSO) [48], dynamic level-based learning swarm optimizer (DLLSO) [49], and DMS-L-PSO [42]


Summary

INTRODUCTION

Evolutionary computation (EC) algorithms, including evolutionary algorithms (EAs) and swarm intelligence algorithms (SIs) [1]–[9], such as genetic algorithm (GA) [10], [11]; differential evolution (DE) [12]–[15]; particle swarm optimization (PSO) [16]–[18]; and ant colony optimization (ACO) [19]–[21], have been widely studied and applied in many real-world optimization problems. If we can further introduce adaptive granularity control and find the appropriate population size to meet the search requirements of the different evolutionary states in different problems, the search process will be more effective. Since EC algorithms store ample data about the search space, problem features, and population information during the iterative search process, ML techniques are helpful for analyzing these data to further enhance search performance. In this way, useful information can be extracted to analyze the evolutionary state and to achieve adaptive granularity control. The AGLS helps determine an appropriate subpopulation size to control the learning granularity of the distributed subpopulations in different evolutionary states, balancing the exploration ability for escaping from massive suboptima and the exploitation ability for converging in the huge search space.
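The idea of using ML to recognize the evolutionary state can be sketched with a plain logistic-regression classifier. The two features used here, a population-diversity score and a recent fitness-improvement rate, together with the toy training data, are assumptions for demonstration only; they are not the exact features or the LR formulation used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, iters=2000):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = sigmoid(X @ w + b)
        # gradient of the average cross-entropy loss
        grad_w = X.T @ (p - y) / len(y)
        grad_b = float(np.mean(p - y))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data (assumed): high diversity + high improvement rate -> exploration (1);
# low diversity + low improvement rate -> exploitation (0).
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],
              [0.1, 0.2], [0.2, 0.1], [0.15, 0.05]])
y = np.array([1, 1, 1, 0, 0, 0])
w, b = train_logreg(X, y)
state = sigmoid(X @ w + b) > 0.5   # True = exploration, False = exploitation
```

At a high level, a predicted exploration state could then be mapped to a coarser learning granularity (larger subpopulations) and a predicted exploitation state to a finer one, mirroring the role the AGLS plays in the algorithm.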

RELATED WORK
PSO Variants for Large-Scale Optimization
Master–Slave Multisubpopulation Distributed Framework
Velocity and Position Update
Complete AGLDPSO Algorithm
Experimental Setup
Comparison With Winner of IEEE CEC2010 Competition
Scalability of AGLDPSO on 2000-D Problems
Effects of AGLS
Influences of Parameters
CONCLUSION