Abstract

This paper develops a model-free approach to the event-triggered optimal consensus problem of multiple Euler-Lagrange systems (MELSs) via reinforcement learning (RL). First, an augmented system is constructed by defining a pre-compensator to circumvent the dependence on system dynamics. Second, the Hamilton-Jacobi-Bellman (HJB) equations are applied to derive the model-free event-triggered optimal controller. Third, we present a policy iteration (PI) algorithm, derived from RL, that converges to the optimal policy. The value function of each agent is then represented by a neural network to realize the PI algorithm, and the gradient descent method is used to update the neural network weights only at a series of discrete event-triggered instants. The specific form of the event-triggered condition is then proposed, and it is shown that the closed-loop augmented system under this event-triggered mechanism is uniformly ultimately bounded (UUB) and that Zeno behavior is excluded. Finally, the validity of the approach is verified by a simulation example.
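To make the described pipeline concrete, the sketch below illustrates, for a single agent, the general pattern the abstract outlines: a value function approximated by a critic network (here a minimal linear-in-parameters quadratic basis), a control policy computed from the last triggered state, and a gradient-descent update of the critic weights performed only when a norm-based triggering condition fires. The dynamics, basis functions, input matrix, threshold gain `sigma`, and learning rate `alpha` are illustrative placeholders, not the paper's actual formulation or triggering condition.

```python
import numpy as np

# --- Illustrative placeholders (not the paper's actual dynamics or gains) ---
n = 2                      # augmented state dimension
Q = np.eye(n)              # state weighting in the utility
R = np.array([[1.0]])      # input weighting
alpha = 0.05               # critic learning rate (gradient descent step)
sigma = 0.5                # triggering threshold gain

def phi(x):
    """Quadratic critic basis: value approximated as V(x) ~ w^T phi(x)."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

def dphi(x):
    """Jacobian of the basis, used for the approximate policy and HJB residual."""
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2, x1],
                     [0.0, 2 * x2]])

def dynamics(x, u):
    """Hypothetical plant, queried only through sampled data (model-free spirit)."""
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    return A @ x + (B @ u).ravel()

def policy(x_hat, w):
    """Event-held control computed from the last triggered state x_hat."""
    g = np.array([[0.0], [1.0]])                 # assumed input matrix of the augmented system
    grad_V = dphi(x_hat).T @ w
    return -0.5 * np.linalg.solve(R, g.T @ grad_V.reshape(-1, 1)).ravel()

# --- Event-triggered critic training loop (sketch) ---
dt, T = 0.01, 10.0
x = np.array([1.0, -0.5])
w = np.zeros(3)                                  # critic weights
x_hat = x.copy()                                 # last triggered (held) state
events = 0

for k in range(int(T / dt)):
    u = policy(x_hat, w)
    # Triggering condition sketch: fire when the gap ||x - x_hat|| exceeds a
    # fraction of ||x|| (the paper derives its own state-dependent threshold).
    if np.linalg.norm(x - x_hat) > sigma * np.linalg.norm(x):
        x_hat = x.copy()
        events += 1
        # Gradient-descent critic update only at the event instant,
        # driven by the approximate HJB (Bellman) residual delta.
        r = x @ Q @ x + u @ R @ u                # instantaneous utility
        x_dot = dynamics(x, u)
        delta = w @ (dphi(x) @ x_dot) + r
        w -= alpha * delta * (dphi(x) @ x_dot)
    x = x + dt * dynamics(x, u)                  # Euler step of the plant

print(f"events: {events}, final state norm: {np.linalg.norm(x):.3f}, weights: {w}")
```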
