Abstract

In this paper, we investigate an event-triggered optimal control problem for discounted-cost nonlinear systems. We first present the event-triggered Hamilton-Jacobi-Bellman equation (ET-HJBE) associated with the solution of the optimal control problem, together with the event-triggering condition. Then, within the framework of reinforcement learning, we employ a single critic network to approximately solve the ET-HJBE. The weight vector of the critic network is tuned via a combination of the gradient descent method and the experience replay technique; an advantage of this tuning rule is that historical state data are fully utilized. Moreover, using the classic Lyapunov approach, we prove that all signals in the closed-loop system are uniformly ultimately bounded. Finally, we verify the effectiveness of the proposed event-triggered control method through simulations of a continuous stirred tank reactor system.
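For concreteness, the critic weight tuning described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the basis functions, the form of the HJB residual, the discount factor, and the learning rate are all hypothetical assumptions; the key idea shown is that each gradient descent step minimizes the squared residual over the current sample plus replayed historical samples.

```python
import numpy as np

def phi(x):
    """Quadratic basis features for the critic V(x) ~ W^T phi(x) (illustrative choice)."""
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

def dphi(x):
    """Jacobian of phi with respect to x."""
    x1, x2 = x
    return np.array([[2.0 * x1, 0.0],
                     [x2,       x1],
                     [0.0,      2.0 * x2]])

def hjb_residual(W, x, xdot, cost, gamma):
    """Approximate discounted-cost HJB residual (assumed form):
    delta = r(x, u) - gamma * W^T phi(x) + W^T dphi(x) xdot."""
    return cost - gamma * (W @ phi(x)) + W @ (dphi(x) @ xdot)

def update_weights(W, sample, buffer, alpha, gamma):
    """One gradient descent step on the summed squared residual over the
    current sample plus replayed historical samples (experience replay)."""
    grad = np.zeros_like(W)
    for (x, xdot, cost) in [sample] + buffer:
        delta = hjb_residual(W, x, xdot, cost, gamma)
        # gradient of 0.5 * delta^2 with respect to W
        g = dphi(x) @ xdot - gamma * phi(x)
        grad += delta * g
    return W - alpha * grad

# Usage: one update combining the latest sample with two stored samples.
W = np.zeros(3)
sample = (np.array([1.0, 0.5]), np.array([-0.2, 0.1]), 1.0)
buffer = [(np.array([0.5, 0.5]), np.array([-0.1, 0.0]), 0.5),
          (np.array([0.8, 0.2]), np.array([-0.3, 0.1]), 0.7)]
W = update_weights(W, sample, buffer, alpha=0.01, gamma=0.9)
```

Because every replayed sample contributes to the gradient, historical state data keep shaping the weights between triggering instants, which is the advantage the abstract attributes to this tuning rule.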