Abstract

Particle Swarm Optimization (PSO) is a relatively recently developed optimization method that has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of both the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is applied to the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
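The chaotic search mentioned above is commonly realized with a logistic map that generates candidate points in a small neighborhood of the current best solution. The sketch below illustrates this general technique only; the choice of map, neighborhood radius, and step count are illustrative assumptions, not the paper's settings.

```python
import random

def chaotic_search(f, best, lb, ub, n_steps=50, radius=0.1, seed=1):
    """Logistic-map chaotic local search around the current best solution.

    A sketch of the general technique; `radius` scales the search
    neighborhood relative to the width of each bound interval.
    """
    rng = random.Random(seed)
    dim = len(best)
    z = [rng.uniform(0.01, 0.99) for _ in range(dim)]  # chaotic variables in (0, 1)
    x_best, f_best = list(best), f(best)
    for _ in range(n_steps):
        # Logistic map: z <- 4 z (1 - z), chaotic on (0, 1).
        z = [4.0 * zj * (1.0 - zj) for zj in z]
        # Map chaotic variables into a small neighborhood of the best point,
        # clamping each coordinate to its bounds.
        cand = [min(max(x_best[j] + radius * (ub[j] - lb[j]) * (2.0 * z[j] - 1.0),
                        lb[j]), ub[j])
                for j in range(dim)]
        fc = f(cand)
        if fc < f_best:  # greedy acceptance: keep only improvements
            x_best, f_best = cand, fc
    return x_best, f_best

# Example: refine a point on the sphere function (minimum 0 at the origin).
sphere = lambda x: sum(t * t for t in x)
x, fx = chaotic_search(sphere, best=[0.5, -0.5], lb=[-5.0, -5.0], ub=[5.0, 5.0])
```

Because only improving candidates are accepted, the returned value is never worse than the starting best, which is why such a step can safely be appended to each PSO iteration.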

Highlights

  • This paper considers the following global optimization problem: min f(x) s.t. x ∈ X (1), where x is a continuous variable vector with domain X ⊂ Rn defined by the bound constraints lj ≤ xj ≤ uj, j = 1, . . . , n

  • More attention has been paid to stochastic algorithms recently, and many effective algorithms have been presented, including Simulated Annealing (SA) [1], Genetic Algorithm (GA) [2, 3], Differential Evolution (DE) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], and Harmony Search (HS) [8]

  • The performance of the Novel PSO Algorithm (NPSO) is compared to that of the standard PSO algorithm by evaluating convergence and the best solution found on 14 benchmark functions, where f7–f11 are shifted functions and f12–f14 are rotated functions


Summary

Introduction

The function f(x) : X → R is a continuous real-valued function. Many real-world problems, for example in engineering and related areas, can be reduced to formulation (1). Such a problem usually has many local optima, so it is difficult to find its global optimum. To solve such problems, researchers have presented many methods over the past years, which can be divided into two groups: deterministic and stochastic algorithms. Most deterministic algorithms are effective only for unimodal functions, which have a single global optimum, and require gradient information. Stochastic algorithms, by contrast, do not require any special properties of the objective function. More attention has been paid to stochastic algorithms recently, and many effective algorithms have been presented, including Simulated Annealing (SA) [1], Genetic Algorithm (GA) [2, 3], Differential Evolution (DE) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], and Harmony Search (HS) [8]
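For concreteness, a minimal global-best PSO for the bound-constrained problem (1) can be sketched as follows. This is the standard PSO baseline, not the authors' NPSO; the inertia weight and acceleration coefficients are common textbook choices, assumed here for illustration.

```python
import random

def pso(f, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for min f(x) s.t. lb[j] <= x[j] <= ub[j]."""
    rng = random.Random(seed)
    dim = len(lb)
    # Random initial positions inside the bounds; zero initial velocities.
    x = [[rng.uniform(lb[j], ub[j]) for j in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                # personal best positions
    pbest_f = [f(xi) for xi in x]              # personal best values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social components.
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                # Position update, clamped to the bound constraints of (1).
                x[i][j] = min(max(x[i][j] + v[i][j], lb[j]), ub[j])
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Example: minimize the sphere function (global minimum 0 at the origin).
sphere = lambda x: sum(t * t for t in x)
best, best_f = pso(sphere, lb=[-5.0] * 3, ub=[5.0] * 3)
```

Variants such as the one described in this paper modify the social component, e.g. by mixing in the best neighbor of each particle alongside the global best.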

