Abstract

We present a differential particle swarm evolution (DPSE) algorithm that combines the velocity and position update rules of particle swarm optimization (PSO) with the differential mutation concept of differential evolution (DE) in a new way. With the goal of optimizing within a limited number of function evaluations, the algorithm is tested against the standard PSO and DE methods on 14 benchmark problems to show that DPSE has the potential to achieve faster convergence and better solutions. Simulation results show that, on average, DPSE outperforms DE by 39.20% and PSO by 14.92% on the 14 benchmark problems. To demonstrate the feasibility of the proposed strategy on a real-world optimization problem, DPSE is applied to tune the parameters of active disturbance rejection control (ADRC) for the PUMA-560 robot.

I. INTRODUCTION

The particle swarm optimization (PSO) algorithm was originally introduced in (1) as an alternative to the standard genetic algorithm (GA). PSO was inspired by insect swarms and has since proven to be a competitor to the standard GA for function optimization. Several researchers have analyzed PSO's performance and shortcomings (2-4); their results indicate that it performs well in the early iterations but has difficulty reaching a near-optimal solution on several real-valued function optimization problems. Differential evolution (DE) is a simple yet powerful evolutionary algorithm (EA) for global optimization introduced by Price and Storn (5). Both PSO and DE have received great interest from the evolutionary computation community and have shown great promise in several real-world applications (6-9). As a result, considerable effort has recently been devoted to combining the two methods to achieve better optimization results. One such method, proposed in 2003 (10), was applied in 2007 to the modeling of gene regulatory networks (11). In that algorithm, the mutations provided by the DE operator are applied only to the personal best individuals, so that unexpected fluctuations do not disorganize the swarm. In 2008, (12) presented another hybrid of PSO with DE, whose application to a high-frequency transformer can be seen in (13); in that algorithm, the DE mutations are used to update both the personal best and the global best. Also in 2008, Swagatam Das et al. (14) proposed a hybridization of PSO and DE for continuous optimization, and (15) applied this algorithm to black-box optimization benchmarking for noisy functions.
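To make the two building blocks concrete, the following Python sketch pairs the PSO velocity and position update rules with a DE-style differential mutation applied to the personal best positions, in the spirit of the hybrid schemes cited above. It is a minimal illustration, not the DPSE algorithm itself: the objective f (a sphere function), the coefficients w, c1, c2, and the scale factor F are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 2                 # swarm size and problem dimension (assumed)
w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients (assumed)
F = 0.5                      # DE scale factor (assumed)

def f(x):                    # example objective: sphere function
    return np.sum(x**2, axis=-1)

x = rng.uniform(-5, 5, (N, D))          # particle positions
v = np.zeros((N, D))                    # particle velocities
pbest = x.copy()                        # personal best positions
gbest = pbest[np.argmin(f(pbest))]      # global best position

for _ in range(100):
    # PSO velocity and position update rules
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v

    # DE-style differential mutation applied only to the personal bests,
    # replacing a pbest when the mutant improves on it (greedy selection)
    for i in range(N):
        a, b, c = rng.choice(N, size=3, replace=False)
        mutant = pbest[a] + F * (pbest[b] - pbest[c])
        if f(mutant) < f(pbest[i]):
            pbest[i] = mutant

    # Refresh personal and global bests from the new positions
    better = f(x) < f(pbest)
    pbest[better] = x[better]
    gbest = pbest[np.argmin(f(pbest))]
```

Restricting the DE mutation to the personal bests, as in the 2003 hybrid described above, perturbs the swarm's memory rather than its current positions, which is why such schemes are less prone to being disorganized by unexpected fluctuations.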
