Abstract

Purpose – The purpose of this paper is to introduce a globally convergent algorithm into a framework for global derivative-free optimization, such as particle swarm optimization (PSO), for which a full proof of convergence is currently missing.

Design/methodology/approach – The classical PSO iteration is replaced by a Newton step whenever the global minimum is not improved. The use of surrogate models to compute the Hessian of the objective function is a key point for keeping the overall computational effort low. Adoption of a trust-region approach guarantees the consistency of the present approach with the original formulation.

Findings – The proposed approach is found to improve on the classical PSO method in most cases. The use of surrogate models and the trust-region approach keeps the overall computational effort at the same level as the original algorithm.

Research limitations/implications – Although the set of algebraic test functions is fairly large, only a single practical example is provided. Further numerical experiments are needed to increase the generality of the conclusions.

Practical implications – The proposed method improves the efficiency of the standard PSO algorithm.

Originality/value – Previous literature does not provide comprehensive systematic studies on coupling PSO with local search algorithms. This paper is a contribution toward closing that gap.
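To illustrate the idea described above, the following is a minimal sketch of a hybrid PSO in which a trust-region-limited Newton step is attempted whenever the global best is not improved. It is not the authors' implementation: the test function, hyperparameters, switching rule, and the finite-difference Hessian (used here as a stand-in for the surrogate model of the Hessian) are all assumptions made for illustration.

```python
import numpy as np

def sphere(x):
    """Assumed example objective: f(x) = sum(x_i^2)."""
    return float(np.sum(x**2))

def finite_diff_grad(f, x, h=1e-6):
    """Central finite-difference gradient (illustrative only)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def finite_diff_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian, a stand-in for the surrogate
    model of the Hessian mentioned in the abstract."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ej = np.zeros(n)
            ei[i] = h; ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def hybrid_pso(f, dim=2, n_particles=20, iters=100, bounds=(-5.0, 5.0),
               w=0.7, c1=1.5, c2=1.5, tr_radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g_idx = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]

    for _ in range(iters):
        # Standard PSO velocity and position update.
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]

        if pbest_f.min() < gbest_f:
            g_idx = int(np.argmin(pbest_f))
            gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]
        else:
            # Global best not improved: attempt a Newton step from the
            # current global best, restricted to the trust region.
            grad = finite_diff_grad(f, gbest)
            H = finite_diff_hessian(f, gbest)
            try:
                step = np.linalg.solve(H, -grad)
            except np.linalg.LinAlgError:
                step = -grad  # fall back to steepest descent if H is singular
            norm = np.linalg.norm(step)
            if norm > tr_radius:
                step *= tr_radius / norm  # clip step to the trust region
            cand = np.clip(gbest + step, lo, hi)
            if f(cand) < gbest_f:
                gbest, gbest_f = cand, f(cand)
                tr_radius *= 2.0   # expand trust region after success
            else:
                tr_radius *= 0.5   # shrink trust region after failure
    return gbest, gbest_f

if __name__ == "__main__":
    best_x, best_f = hybrid_pso(sphere)
    print("best point:", best_x, "best value:", best_f)
```

Running the sketch on the assumed sphere function converges quickly to the origin; the trust-region expand/shrink rule used here is one common heuristic and is not claimed to match the paper's specific update.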
