Abstract

Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space toward the best known global and local solutions with randomized step lengths. PSO frequently accelerates optimization in practical applications where gradients are unavailable and function evaluations are expensive. Yet the traditional PSO algorithm ignores the knowledge of the objective function that could be gained from the observations made by individual particles. Hence, we draw upon concepts from Bayesian optimization and introduce a stochastic surrogate model of the objective function. That is, we fit a Gaussian process to past evaluations of the objective function, forecast its shape, and then adapt the particle movements based on it. Our computational experiments demonstrate that our approach outperforms baseline implementations of PSO (i.e., SPSO2011). Furthermore, compared to state-of-the-art surrogate-assisted evolutionary algorithms, we achieve substantial performance improvements on several popular benchmark functions. Overall, we find that our algorithm attains desirable properties for exploratory and exploitative behavior.
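To make the mechanism described above concrete, the following is a minimal sketch of the canonical global-best PSO update, not the SPSO2011 variant that the paper benchmarks against and without the Gaussian-process surrogate. The inertia weight, acceleration coefficients, swarm size, and search bounds are illustrative assumptions:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Canonical global-best PSO: each particle is pulled toward its own
    best-seen point (pbest) and the swarm's best-seen point (gbest),
    with uniformly random step lengths r1, r2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()           # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))            # randomized step lengths
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                          # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()       # update global best
    return gbest, pbest_f.min()
```

On a smooth test function such as the 2-D sphere, this baseline typically converges to the minimum within a few hundred iterations, which is the behavior the surrogate-assisted variant aims to accelerate.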

Highlights

  • Stochastic optimization methods refer to optimization methods that incorporate random variables into a search process (Gentle, Härdle, & Mori, 2012, Chapter 7) and often improve performance in a large variety of practical settings (Hoos & Stützle, 2005)

  • Several variants of the original particle swarm optimization (PSO) algorithm have been developed (e.g., SPSO2011; Zambrano-Bigiarini, Clerc, & Rojas, 2013), which we summarize in our review section

  • We propose a combination of the PSO mechanism with a stochastic surrogate model of the objective function, so that the swarm search can be directed strategically
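The surrogate idea in the last highlight can be illustrated with a from-scratch Gaussian-process posterior: condition on past evaluations of the objective, then read off a predictive mean (the "forecast" of the function's shape) and standard deviation at candidate points. The RBF kernel, its length scale, and the noise jitter are assumptions for illustration, not the paper's actual model choices:

```python
import numpy as np

def rbf(A, B, length_scale=0.5):
    """Squared-exponential kernel between point sets A (n,d) and B (m,d)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and stddev of a zero-mean GP at query points Xs,
    conditioned on past evaluations (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))       # kernel of observed points
    L = np.linalg.cholesky(K)                    # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha                            # predictive mean ("forecast")
    V = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - np.sum(V**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))     # predictive uncertainty

# Example: condition on 6 evaluations of a cheap stand-in objective.
X = np.linspace(-2.0, 2.0, 6).reshape(-1, 1)
y = np.sin(3.0 * X).ravel()
mu, sd = gp_posterior(X, y, np.linspace(-2.0, 2.0, 25).reshape(-1, 1))
```

The predictive standard deviation shrinks near observed points and grows away from them, which is what allows particle movements to be directed strategically rather than blindly.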

Introduction

Stochastic optimization methods refer to optimization methods that incorporate random variables into a search process (Gentle, Härdle, & Mori, 2012, Chapter 7) and often improve performance in a large variety of practical settings (Hoos & Stützle, 2005). In the black-box setting considered here, one can only query a function f at single points x, for which the corresponding evaluation f(x) is returned. Such problems are prevalent in numerous applications from engineering, medicine, and economics, among others, where the underlying function is computationally or economically expensive to evaluate (Rios & Sahinidis, 2013). In these cases, we might prefer to terminate the search process after a certain number of iterations or when the relative convergence fulfills predefined criteria.

