Abstract

Optimal event-triggered control of nonlinear continuous-time systems using only input and output data is a challenging problem due to system uncertainties, the unavailability of the state vector, and the event-based sampling of outputs transmitted between the plant and the controller. Therefore, a novel reinforcement-learning-based approach is proposed to solve the time-based near-optimal event-triggered control of nonlinear continuous-time systems. First, using measured input and output data, the nonlinear continuous-time system is represented in an input-output form suitable for data-driven control. Then, an online neural network (NN) identifier is developed to estimate the control coefficient matrix from the input-output data; this estimate is subsequently utilized along with a critic NN to obtain a time-based near-optimal event-triggered control scheme in a forward-in-time manner. Novel aperiodic update laws are derived for the NNs by using the event-trigger error, while a novel event-trigger condition is designed to ensure the overall stability of the proposed scheme. Finally, Lyapunov analysis is utilized to demonstrate that all closed-loop signals and NN weights are ultimately bounded (UB).
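The abstract does not give the paper's equations, so the following is only a minimal sketch, in Python, of the generic structure such an event-triggered scheme takes: control and NN weight updates occur aperiodically, only when an event-trigger condition on the event error fires. All function names, gains, basis functions, and the trigger threshold are assumptions for illustration, not the authors' design.

```python
import numpy as np

# Illustrative sketch only: generic event-triggered loop with a critic NN.
# All dimensions, gains, and dynamics below are assumed placeholders.

rng = np.random.default_rng(0)

n_y, n_u = 2, 1                                   # output / input dimensions (assumed)
W_critic = rng.standard_normal((n_y, 1)) * 0.1    # critic NN weights (placeholder)
u = np.zeros(n_u)                                 # last transmitted control input
y_last = np.zeros(n_y)                            # output at the last trigger instant

def phi(y):
    """Assumed basis functions for the critic NN."""
    return np.tanh(y)

def controller(y, W):
    """Placeholder near-optimal control estimate built from the critic output."""
    return -0.5 * (W.T @ phi(y))

def trigger(y, y_last, sigma=0.1):
    """Assumed event-trigger condition: fire when the event error
    ||y - y_last|| exceeds a state-dependent threshold."""
    return np.linalg.norm(y - y_last) > sigma * np.linalg.norm(y) + 1e-3

dt, alpha = 0.01, 0.05        # integration step and learning rate (assumed)
y = np.array([1.0, -0.5])     # initial output (assumed)

for k in range(1000):
    if trigger(y, y_last):
        # Event instant: transmit the output, update control and NN weights
        y_last = y.copy()
        u = controller(y, W_critic)
        # Aperiodic critic update (illustrative gradient-like correction)
        W_critic -= alpha * phi(y).reshape(-1, 1) * float(y @ y + u @ u)
    # Placeholder plant in input-output form (assumed stable dynamics)
    y = y + dt * (-y + np.array([u[0], 0.0]))
```

Between events the last transmitted control is held constant, which is what makes the update laws aperiodic; the trigger threshold trades off communication savings against the size of the ultimate bound.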
