Abstract

We present a generalization of the scalar gradient extremum seeking (ES) algorithm, which maximizes static maps in the presence of infinite-dimensional dynamics described by parabolic partial differential equations (PDEs). The PDE dynamics contain reaction-advection-diffusion (RAD)-like terms. The effects of the PDE dynamics on the additive dither signals are canceled using the trajectory generation paradigm. Moreover, a boundary controller for the PDE process stabilizes the closed-loop feedback system. By demodulating the map output in accordance with the manner in which it is perturbed, the ES algorithm maximizes the output of the unknown map. In particular, our parabolic PDE compensator employs the same perturbation-based (averaging-based) estimate of the Hessian of the function to be maximized that was used in previous PDE-free publications. We prove local stability of the algorithm, real-time maximization of the map, and convergence to a small neighborhood of the desired (unknown) extremum by means of a backstepping transformation, a Lyapunov functional, and the theory of averaging in infinite dimensions. Finally, we present the generalization to the scalar Newton-based ES algorithm, which maximizes higher derivatives of the map in the presence of RAD-like dynamics. By modifying the demodulation signals, the ES algorithm maximizes the nth derivative using measurements of the map itself only. The Newton-based ES approach removes the dependence of the convergence rate on the unknown Hessian of the higher derivative, thereby improving performance and removing a limitation of standard gradient-based ES. Numerical examples support the theoretical results.
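To make the perturbation-demodulation mechanism concrete, the following is a minimal sketch of scalar gradient ES on a static quadratic map, without the RAD PDE dynamics and their compensator developed in the paper; the quadratic form of the map and all numerical values are illustrative assumptions, not the paper's example.

import math

# Minimal gradient extremum-seeking sketch (assumed quadratic map, no PDE):
# y = f_star + 0.5 * H * (theta - theta_star)**2 with H < 0, maximum at theta_star.
f_star, theta_star, H = 1.0, 2.0, -1.0   # unknown to the algorithm
a, omega, k = 0.2, 100.0, 0.2            # dither amplitude, dither frequency, adaptation gain
dt, T = 1e-4, 50.0                       # Euler step and simulation horizon

theta_hat = 0.0                          # initial estimate of the maximizer
for i in range(int(T / dt)):
    t = i * dt
    theta = theta_hat + a * math.sin(omega * t)        # additive dither (perturbation)
    y = f_star + 0.5 * H * (theta - theta_star) ** 2   # measured map output
    G = (2.0 / a) * math.sin(omega * t) * y            # demodulated gradient estimate
    theta_hat += dt * k * G                            # gradient update (k > 0 maximizes)

print(f"theta_hat = {theta_hat:.3f}, true maximizer theta_star = {theta_star}")

In the averaged sense, the demodulated signal G behaves like H*(theta_hat - theta_star), so theta_hat converges to a small neighborhood of the maximizer at a rate proportional to the unknown Hessian H. In the Newton-based variant, the output is additionally demodulated with a signal proportional to -(8/a**2)*cos(2*omega*t), as in earlier PDE-free Newton ES schemes, to produce an averaging-based Hessian estimate whose inverse makes the convergence rate independent of H.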
