Abstract

We propose a novel event-triggered optimal tracking control algorithm for nonlinear systems with an infinite-horizon discounted cost. The problem is formulated by appropriately augmenting the system and reference dynamics and then solved using ideas from reinforcement learning. Namely, a critic network is used to estimate the optimal cost, while an actor network is used to approximate the optimal event-triggered controller. Because the actor network updates only when an event occurs, we use a zero-order hold along with appropriate tuning laws to account for this behavior. Because the closed-loop dynamics evolve in both continuous and discrete time, we write the closed-loop system as an impulsive model and prove asymptotic stability of the equilibrium point and exclusion of Zeno behavior. Simulation results for a helicopter, a one-link rigid robot under a gravitational field, and a controlled Van der Pol oscillator are presented to show the efficacy of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
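The event-triggered mechanism summarized above can be illustrated with a minimal sketch. This is not the paper's actor-critic algorithm; it only shows the zero-order-hold structure on a hypothetical scalar plant dx/dt = -x + u, with a fixed feedback gain standing in for the trained actor network, and a simple state-gap triggering condition with an assumed threshold.

```python
import numpy as np

def simulate_event_triggered(x0, T=5.0, dt=0.001, threshold=0.05):
    """Regulate the scalar plant dx/dt = -x + u, recomputing the
    control only at event instants and holding it between events
    with a zero-order hold (ZOH)."""
    k = 2.0            # stand-in feedback gain (plays the role of the actor)
    x = x0
    x_event = x0       # state sampled at the last event
    u = -k * x_event   # control held constant between events (ZOH)
    events = 0
    t = 0.0
    while t < T:
        # Event-triggering condition: the gap between the current state
        # and the last-sampled state exceeds the threshold.
        if abs(x - x_event) > threshold:
            x_event = x
            u = -k * x_event   # controller updates only at events
            events += 1
        x = x + dt * (-x + u)  # forward-Euler step of the plant
        t += dt
    return x, events

x_final, n_events = simulate_event_triggered(1.0)
```

In this sketch the state converges to a neighborhood of the origin while the controller updates only a handful of times, far fewer than the number of integration steps, which is the sampling-economy argument that motivates event-triggered designs.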
