Abstract
This paper addresses event-triggered optimal tracking control of discrete-time multi-agent systems using reinforcement learning. In contrast to traditional reinforcement-learning-based methods for the optimal coordination and control of multi-agent systems, which rely on a time-triggered control mechanism, an event-triggered mechanism is proposed that updates the controller only when the designed events are triggered, thereby reducing the computational burden and transmission load. Stability of the closed-loop multi-agent system under the event-triggered controller is analyzed. To implement the proposed scheme, an actor-critic neural network structure is introduced to approximate the performance indices and to learn the event-triggered optimal control online. During training, an event-triggered weight-tuning law is designed in which the weight parameters of the actor neural networks are adjusted only at triggering instants, in contrast to traditional methods with fixed updating periods. A convergence analysis of the actor-critic neural networks is then provided via the Lyapunov method. Finally, two simulation examples demonstrate the effectiveness and performance of the proposed event-triggered reinforcement learning controller.
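To make the scheme concrete, the following is a minimal Python sketch of one event-triggered actor-critic step in the spirit of the abstract. All dimensions, learning rates, the quadratic triggering threshold, the input matrix g, and the HDP-style actor target are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Illustrative dimensions, gains, and triggering threshold; the paper's
# exact condition and tuning laws are not reproduced here.
n_state, n_action, n_hidden = 4, 2, 16
alpha_a, alpha_c = 0.01, 0.05          # actor / critic learning rates (assumed)
sigma, gamma = 0.5, 0.95               # event threshold and discount (assumed)

rng = np.random.default_rng(0)
Wa = rng.normal(scale=0.1, size=(n_hidden, n_action))  # actor output weights
Wc = rng.normal(scale=0.1, size=(n_hidden, 1))         # critic output weights
Va = rng.normal(scale=0.1, size=(n_state, n_hidden))   # fixed actor features
Vc = rng.normal(scale=0.1, size=(n_state, n_hidden))   # fixed critic features
g = rng.normal(scale=0.1, size=(n_state, n_action))    # assumed input matrix
R = np.eye(n_action)                                   # control weight matrix

def phi(V, x):
    """Hidden-layer feature vector of a single-hidden-layer network."""
    return np.tanh(x @ V)

def step(x_k, x_hat, u_hat, r_k):
    """One discrete-time step of an event-triggered actor-critic update."""
    global Wa, Wc

    # Event gap between the current state and the last-transmitted state;
    # an assumed state-dependent quadratic threshold decides triggering.
    gap = x_k - x_hat
    triggered = gap @ gap > sigma * (x_k @ x_k)

    # Critic: temporal-difference update of the approximated performance index.
    td = r_k + gamma * float(phi(Vc, x_k) @ Wc) - float(phi(Vc, x_hat) @ Wc)
    Wc += alpha_c * td * phi(Vc, x_hat)[:, None]

    if triggered:
        # Gradient of the critic's value estimate w.r.t. the state, used to
        # form an HDP-style target control u* = -(gamma/2) R^{-1} g^T grad(V).
        dphi = 1.0 - phi(Vc, x_k) ** 2
        grad_v = Vc @ (dphi * Wc[:, 0])
        u_star = -(gamma / 2.0) * np.linalg.solve(R, g.T @ grad_v)

        # Actor weights are adjusted only at triggering instants.
        err_a = phi(Va, x_k) @ Wa - u_star
        Wa -= alpha_a * np.outer(phi(Va, x_k), err_a)
        x_hat, u_hat = x_k.copy(), phi(Va, x_k) @ Wa  # refresh held values

    return x_hat, u_hat, triggered

# Minimal usage: feed a few random tracking-error states through the loop.
x_hat, u_hat = np.zeros(n_state), np.zeros(n_action)
for _ in range(5):
    x_k = rng.normal(size=n_state)
    r_k = x_k @ x_k + u_hat @ R @ u_hat    # quadratic one-step cost
    x_hat, u_hat, fired = step(x_k, x_hat, u_hat, r_k)
```

Gating only the actor update on the event condition mirrors the abstract's point that actor weights change solely at triggering instants, while the critic may still refine its estimate at every step.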