Abstract

Particle swarm optimization (PSO) is one of the most popular nature-inspired optimization algorithms. The canonical PSO is easy to implement and converges fast; however, it suffers from premature convergence. Comprehensive learning particle swarm optimization (CLPSO) achieves high exploration but converges relatively slowly on unimodal problems. To enhance the exploitation of CLPSO without significantly impairing its exploration, a multi-leader (ML) strategy is combined with CLPSO. In the ML strategy, a group of top-ranked particles act as leaders to guide the motion of the whole swarm. Each particle is randomly assigned an individual leader, and the leaders are refreshed dynamically during the optimization process. To activate the stagnated particles, an adaptive mutation (AM) strategy is introduced. Combining the ML and AM strategies with CLPSO, the resultant algorithm is referred to as multi-leader comprehensive learning particle swarm optimization with adaptive mutation (ML-CLPSO-AM). To evaluate the performance of ML-CLPSO-AM, the CEC2017 test suite was employed. The test results indicate that ML-CLPSO-AM performs better than ten popular PSO variants and six other representative evolutionary algorithms and meta-heuristics. To validate its effectiveness in real-life applications, ML-CLPSO-AM was applied to economic load dispatch (ELD) problems.
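
To make the multi-leader idea described above concrete, the following is a minimal sketch of a leader-assignment step: the top-ranked particles (by personal-best fitness) serve as leaders and every particle is randomly assigned one of them. Function and parameter names such as `assign_leaders` and `num_leaders` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def assign_leaders(pbest_fitness, num_leaders, rng):
    """Illustrative multi-leader assignment (minimization assumed):
    the top-ranked particles act as leaders and each particle in the
    swarm is randomly assigned one leader index."""
    order = np.argsort(pbest_fitness)      # best personal bests first
    leaders = order[:num_leaders]          # indices of the leader particles
    n = len(pbest_fitness)
    return rng.choice(leaders, size=n)     # one leader index per particle

# Example: 20 particles, 4 leaders; re-run this whenever the ranking changes
rng = np.random.default_rng(0)
pbest_fitness = rng.random(20)
leader_of = assign_leaders(pbest_fitness, num_leaders=4, rng=rng)
```

Because the assignment is refreshed dynamically, a particle does not follow a single global best for the whole run, which is the mechanism the paper credits for preserving exploration while adding exploitation.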

Highlights

  • Optimization problems are commonly found in science and engineering applications

  • ML-comprehensive learning particle swarm optimization (ML-CLPSO) variants with different leader sizes were tested, and their diversity and convergence curves are reported. ML-CLPSO-AM improves significantly and outperforms ML-CLPSO(10) because the adaptive mutation (AM) strategy activates the stagnated particles to explore potentially promising areas

  • The diversity of CLPSO-AM is approximately equal to that of comprehensive learning particle swarm optimization with Gbest (CLPSO-G) in the early stage, while in the later stage the diversity of CLPSO-AM is lower than that of CLPSO-G. This is because the AM strategy encourages the motion of stagnated particles by transferring them to explore possibly promising areas, whereas in CLPSO-G, if a particle's fitness stops improving, its Pbest stays in the same position until the particle finds a better one; hence the diversity of CLPSO-G is higher than that of CLPSO-AM in the later stage (a sketch of such a stagnation-triggered mutation is given below)
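
The sketch below illustrates the stagnation-triggered mutation idea referenced in the highlights; it is a simplified assumption of how the AM strategy might perturb a long-stagnated particle, not the paper's exact operator, and names such as `adaptive_mutation` and `threshold` are hypothetical.

```python
import numpy as np

def adaptive_mutation(position, stagnation, bounds, threshold, rng):
    """Illustrative stagnation-triggered mutation: if a particle's Pbest
    has not improved for `threshold` iterations, perturb one randomly
    chosen dimension so the particle re-explores the search space."""
    low, high = bounds
    if stagnation >= threshold:
        dim = rng.integers(len(position))        # dimension to mutate
        sigma = 0.1 * (high - low)               # step scaled to the search range
        position = position.copy()
        position[dim] = np.clip(position[dim] + rng.normal(0.0, sigma), low, high)
    return position
```

In this simplified view, mutating only stagnated particles explains the diversity behaviour noted above: particles are moved when they stop improving, instead of keeping their Pbest fixed as in CLPSO-G.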

Summary

Introduction

Optimization problems are commonly found in science and engineering applications, and they are becoming more and more complex. The canonical PSO moves each particle through the attractive forces of the global best position (Gbest) and the particle's own personal best position (Pbest). Although this mechanism obtains a high convergence rate, the canonical PSO suffers from premature convergence on complex multimodal problems. To address the premature convergence of traditional PSO, Dong et al. [43] proposed an opposition-based particle swarm optimization with adaptive mutation strategy (AMOPSO). To enhance the exploitation of CLPSO without significantly weakening its exploration, a multi-leader (ML) strategy is combined with CLPSO, and the resultant algorithm is called multi-leader comprehensive learning particle swarm optimization (ML-CLPSO). To activate the stagnated particles, an adaptive mutation (AM) strategy is incorporated into ML-CLPSO, and the resultant algorithm is referred to as multi-leader comprehensive learning particle swarm optimization with adaptive mutation (ML-CLPSO-AM). The rest of this paper is organized as follows: Section 2 reviews the related work, Section 3 introduces the methodologies, Section 4 reports the experimental results, Section 5 examines the application to ELD problems, and Section 6 concludes the paper.
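
For reference, here is a minimal sketch of the canonical PSO velocity and position update mentioned above, in which each particle is pulled toward its Pbest and the swarm's Gbest. The parameter values (inertia weight w, acceleration coefficients c1 and c2) are illustrative defaults, not the settings used in the paper.

```python
import numpy as np

def canonical_pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update: velocity combines inertia with random
    attraction toward the personal best and the global best positions."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Because every particle is attracted to the same Gbest, the swarm can collapse onto one basin early, which is the premature-convergence behaviour the ML and AM strategies are designed to counteract.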

Canonical PSO
The Social Learning Leader
Mutation
Multi-Leader Strategy
Adaptive Mutation Strategy
Test Problems
Parameter Settings
Comparison Test of Different Strategies
Comparison Test with PSO Variants
Convergence
Comparison Test with EAs and Meta-Heuristics
Parameter Sensitivity Analysis
Problem Definition
Objective
Comparison of PSO Algorithms on ELD Problem
Comparison with ELD Tailored Algorithms
Findings
Conclusions