Abstract

Extremum seeking denotes a class of control approaches that steer the input of a dynamical system towards the optimizer of an associated cost function. The main strength of these schemes is that they achieve this objective without any prior knowledge of the mathematical expression of the cost or the value of its gradient. However, these algorithms typically suffer from slow convergence. In this paper, we propose a novel class of Newton-like extremum seeking systems that separate the gradient and Hessian estimation from the update of the cost input towards the optimizer. The stability properties of the proposed schemes, including the ability of some of them to enforce the asymptotic convergence of the cost inputs to the optimizer, are proved. Their performance is also examined via numerical examples, which demonstrate the ability of the presented schemes to significantly reduce the convergence time compared with existing Newton-like schemes.
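
To make the general idea concrete, below is a minimal, hypothetical Python sketch of a textbook dither-based, Newton-like extremum seeking loop for a scalar input: the controller only measures the cost, demodulates the measurement to form gradient and curvature estimates, and applies a Newton-type update. The cost J, the dither and filter parameters, and the safeguard on the curvature estimate are assumptions made for this sketch; it does not reproduce the schemes proposed in the paper, in particular their separation of the estimation from the input update.

```python
import numpy as np

# Hypothetical illustration only: a classical perturbation-based,
# Newton-like extremum seeking loop for a scalar input. All parameters
# below are illustrative choices, not values from the paper.

def J(u):
    # Cost assumed unknown to the controller; only its measurements are used.
    # Minimizer u* = 2.0, curvature J''(u*) = 1.0 (illustrative choice).
    return 3.0 + 0.5 * (u - 2.0) ** 2

dt, T = 1e-3, 50.0     # integration step and simulation horizon
a, omega = 0.1, 10.0   # dither amplitude and frequency (rad/s)
k = 0.5                # gain of the Newton-like update
w_f = 1.0              # cutoff of the washout / demodulation filters

u_hat = -1.0           # current estimate of the optimizer
eta = J(u_hat)         # washout filter state (slowly varying part of the cost)
g_hat = 0.0            # filtered gradient estimate
h_hat = 1.0            # filtered curvature (Hessian) estimate

for i in range(int(T / dt)):
    t = i * dt
    u = u_hat + a * np.sin(omega * t)   # probe the cost with a sinusoidal dither
    y = J(u)                            # measured cost value

    # Washout: remove the slowly varying component of the measurement.
    eta += dt * w_f * (y - eta)
    y_hp = y - eta

    # Demodulation: the first harmonic carries the gradient, the second
    # harmonic carries the curvature (standard perturbation-based estimates).
    g_meas = (2.0 / a) * np.sin(omega * t) * y_hp
    h_meas = -(8.0 / a ** 2) * np.cos(2.0 * omega * t) * y_hp

    # First-order low-pass filters to average out the oscillating terms.
    g_hat += dt * w_f * (g_meas - g_hat)
    h_hat += dt * w_f * (h_meas - h_hat)

    # Newton-like update: gradient scaled by the inverse curvature estimate.
    # The crude lower bound keeps the step well defined when the curvature
    # estimate is noisy far from the optimizer (illustrative safeguard only).
    u_hat -= dt * k * g_hat / max(h_hat, 0.5)

print(f"estimated optimizer: {u_hat:.3f} (true optimizer: 2.000)")
```

In this simplified loop, the estimation filters and the input update run simultaneously; the schemes studied in the paper instead separate these two steps, which is what enables the reported reduction in convergence time.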
