Abstract

In this paper, event-based near-optimal control of uncertain nonlinear discrete-time systems is presented using input-output data and approximate dynamic programming (ADP). The nonlinear system dynamics in affine form are first transformed into an input-output form. Three neural networks (NNs) driven by the event-sampled input-output vector are then employed: an identifier NN to relax the need for knowledge of the system dynamics, a critic NN to approximate the value function, which is the solution to the Hamilton-Jacobi-Bellman (HJB) equation, and an actor NN to approximate the optimal control policy, all trained online without value or policy iterations. In addition, the weights of all three NNs are tuned only at event-triggered instants, yielding a novel non-periodic update rule that reduces computation compared to traditional NN-based schemes. Further, an event-trigger condition for deciding the trigger instants is derived. Finally, the Lyapunov technique is used in conjunction with the event-trigger condition to guarantee uniform ultimate boundedness (UUB) of the closed-loop system. The analytical design is substantiated with numerical results via simulation.
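To make the event-triggered idea concrete, the following is a minimal, purely illustrative sketch of the mechanism the abstract describes: the controller holds the last event-sampled state, fires an event only when the sampling error outgrows a state-dependent threshold, and adjusts a controller weight only at those non-periodic event instants. The scalar dynamics, the threshold constant `sigma`, and the placeholder weight-update rule are assumptions for illustration only; they are not the paper's identifier/critic/actor scheme or its derived trigger condition.

```python
import numpy as np

def trigger(x, x_last, sigma=0.5):
    """Fire an event when the sampling error exceeds sigma * ||x||.

    A relative (state-dependent) threshold of this kind is a common form
    for event-trigger conditions; the paper derives its own condition.
    """
    return np.linalg.norm(x - x_last) > sigma * np.linalg.norm(x)

# Placeholder affine-form dynamics x_{k+1} = f(x_k) + g(x_k) * u_k
f = lambda x: 0.9 * x
g = lambda x: 1.0

W = np.array([0.1])      # controller weight (a single "neuron" stands in
                         # for the actor NN in this toy example)
x = np.array([1.0])      # initial state
x_last = x.copy()        # last event-sampled state held by the controller
events = 0

for k in range(200):
    if trigger(x, x_last):
        x_last = x.copy()
        # Non-periodic update: the weight is tuned only at event instants.
        # The rule below (a step toward a hand-picked gain 0.5) is a
        # placeholder, not the paper's NN weight-tuning law.
        W = W + 0.05 * (0.5 - W)
        events += 1
    u = -(W * x_last)            # control computed from the held sample
    x = f(x) + g(x) * u

print(f"events: {events} of 200 steps, final |x| = {abs(x[0]):.2e}")
```

Because the weights and the held sample change only when the trigger fires, the number of updates stays well below the number of time steps while the state still decays, which is the computational saving the abstract claims for the event-sampled scheme.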
