Abstract

In this paper, event-triggered optimal tracking control of discrete-time multi-agent systems is addressed using reinforcement learning. In contrast to traditional reinforcement learning-based methods for optimal coordination and control of multi-agent systems, which rely on a time-triggered control mechanism, an event-triggered mechanism is proposed that updates the controller only when designed events are triggered, reducing the computational burden and transmission load. A stability analysis of the closed-loop multi-agent system with the event-triggered controller is presented. To implement the proposed scheme, an actor-critic neural network structure is introduced to approximate the performance indices and to learn the event-triggered optimal control online. During training, an event-triggered weight-tuning law is designed so that the weight parameters of the actor neural networks are adjusted only at triggering instants, in contrast with traditional methods that use fixed updating periods. A convergence analysis of the actor-critic neural networks is then provided via the Lyapunov method. Finally, two simulation examples demonstrate the effectiveness and performance of the proposed event-triggered reinforcement learning controller.
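The core idea of the event-triggered mechanism described above, holding the last control input until an event condition fires, can be sketched as follows. This is a minimal illustration, not the paper's method: the linear dynamics, the feedback gain `K`, and the trigger condition of the form ||e_k|| > sigma * ||x_k|| are all assumptions chosen for clarity.

```python
import numpy as np

def event_triggered_run(A, B, K, x0, steps=50, sigma=0.5):
    """Simulate x_{k+1} = A x_k + B u_k with u_k = -K x_hat,
    where x_hat (the sampled state) is refreshed only at
    triggering instants; between events the control is held."""
    x = x0.copy()
    x_hat = x0.copy()          # last transmitted/sampled state
    triggers = 0
    for _ in range(steps):
        e = x - x_hat          # event (gap) error
        # Assumed trigger condition: event error exceeds a
        # state-dependent threshold.
        if np.linalg.norm(e) > sigma * np.linalg.norm(x):
            x_hat = x.copy()   # sample state, update controller
            triggers += 1
        u = -K @ x_hat         # zero-order hold between events
        x = A @ x + B @ u
    return x, triggers

# Hypothetical double-integrator-like agent with an assumed
# stabilizing gain K (not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[10.0, 5.0]])
x_final, n_trig = event_triggered_run(A, B, K, np.array([1.0, 0.0]))
print(n_trig)  # fewer controller updates than the 50 time steps
```

The point of the sketch is the saving the abstract refers to: the controller and, in the learning setting, the actor weights are recomputed `n_trig` times rather than once per time step.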
