Abstract

We compare 27 modifications of the original particle swarm optimization (PSO) algorithm. The analysis evaluated nine basic PSO types, which differ in how the swarm evolution is controlled by various inertia weights and the constriction factor. Each basic PSO modification was analyzed using three distribution strategies. In the first strategy, the entire swarm population is treated as one unit (OC-PSO); the second strategy periodically partitions the population into equally large complexes according to the particles' functional values (SCE-PSO); and the final strategy periodically splits the swarm population into complexes using a random permutation (SCERand-PSO). All variants are tested on 11 benchmark functions prepared for the special session on real-parameter optimization at CEC 2005. It was found that the best modification of the PSO algorithm is the variant with an adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than OC-PSO and SCERand-PSO on seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process.
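The three distribution strategies differ only in how particles are assigned to complexes before each shuffling phase. The following Python sketch illustrates that assignment step under the assumption that SCE-PSO deals fitness-sorted particles into complexes round-robin, as in the classic shuffled complex evolution scheme; the function and parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def split_into_complexes(fitness, n_complexes, strategy):
    """Return index arrays that partition a swarm into equally large complexes.

    strategy:
      "OC"      - the whole swarm is kept as a single complex (OC-PSO)
      "SCE"     - particles sorted by fitness, dealt round-robin (SCE-PSO)
      "SCERand" - particles assigned by a random permutation (SCERand-PSO)
    """
    n = len(fitness)
    if strategy == "OC":
        return [np.arange(n)]
    if strategy == "SCE":
        order = np.argsort(fitness)        # best particles first (minimization)
    elif strategy == "SCERand":
        order = np.random.permutation(n)   # ignore fitness entirely
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    # complex k receives members order[k], order[k + n_complexes], ...
    return [order[k::n_complexes] for k in range(n_complexes)]
```

Each complex then evolves independently for a number of iterations before the swarm is reshuffled, which is the mechanism the abstract credits with improving the optimization process.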

Highlights

  • Particle swarm optimization (PSO) is a stochastic, metaheuristic computational technique for searching for optimal regions of a multidimensional space

  • The range of the problem space depends on the benchmark function (Table 2)

  • The range, which in PSO optimization is defined by the lower and upper bounds of the search space, is partitioned into n intervals of equal probability 1/n (see the sketch after this list)
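One common way to realize this stratified initialization, per dimension, is Latin-hypercube-style sampling: the bounded range is split into n equal-width intervals, which have equal probability 1/n under a uniform distribution, and one point is drawn from each. The sketch below is an illustration under that assumption; the function name is hypothetical.

```python
import numpy as np

def stratified_init(lower, upper, n):
    """Draw n samples from [lower, upper] for one dimension,
    one sample per interval of equal probability 1/n."""
    edges = np.linspace(lower, upper, n + 1)            # n equal-width intervals
    samples = np.random.uniform(edges[:-1], edges[1:])  # one draw per interval
    np.random.shuffle(samples)                          # decouple particle order
    return samples
```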

Introduction

Particle swarm optimization (PSO) is a stochastic, metaheuristic computational technique for searching for optimal regions of a multidimensional space. It is an optimization method inspired by the social behaviour of organisms and was established by Kennedy and Eberhart in 1995 [1]. PSO's main benefits are that there are few parameters to adjust and the method is easy to implement. Another advantage of PSO over derivative-based local search methods is that no gradient information is needed during the iterative search when solving complicated optimization problems [4,5,6].
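For context, a minimal sketch of the standard inertia-weight velocity and position update is shown below; the coefficient values are illustrative defaults, not the settings compared in the study.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(-100.0, 100.0)):
    """One inertia-weight PSO update for the whole swarm.

    x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,).
    w is the inertia weight; c1, c2 are the cognitive and social coefficients.
    """
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, *bounds)   # keep particles inside the search space
    return x, v
```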
