Abstract

In this paper, the event-triggered distributed zero-sum differential game problem for multi-agent systems is investigated with the aim of reducing the computational and communication burden. First, based on the minimax principle, an adaptive event-triggered distributed iterative differential game strategy is derived, with an adaptive triggering condition that updates the control scheme aperiodically. Then, to implement the proposed strategy, the solution of the coupled Hamilton–Jacobi–Isaacs (HJI) equations is approximated by constructing a critic neural network (NN). To further relax the restrictive persistence of excitation (PE) condition, a novel PE-free updating law is designed using the experience replay method. The distributed event-triggered nonlinear system is then formulated as an impulsive dynamical system, and the stability analysis shows that the developed strategy guarantees that all closed-loop signals are uniformly ultimately bounded (UUB). Moreover, the minimal inter-event time is proved to have a positive lower bound, which excludes the infamous Zeno behavior. Finally, simulation results show that the number of controller updates is significantly reduced, which saves computational and communication resources.
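To make the two key ingredients of the abstract concrete, the sketch below illustrates, under assumptions, an event-triggered critic update combined with an experience-replay buffer. It is not the paper's algorithm: the toy dynamics, quadratic critic basis, running cost, learning rate, and trigger gain are placeholders introduced only for this example, and the control and disturbance inputs of the zero-sum game are omitted. The point it shows is that weight updates happen only at triggering instants and that replaying stored regressors drives the Bellman residual to zero without a persistence-of-excitation signal.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's algorithm): an event-triggered
# critic update that reuses a replay buffer of past regressors so the weights
# can converge without a persistence-of-excitation condition. All dynamics,
# basis functions, and gains below are assumptions made for this example only.

dt, alpha, trigger_gain = 0.01, 0.5, 0.2     # step size, learning rate, trigger gain

def f(x):
    """Toy open-loop dynamics (placeholder, not the paper's multi-agent system)."""
    return np.array([-x[0] + x[1], -0.5 * x[1]])

def phi(x):
    """Quadratic critic basis, V(x) ~= W^T phi(x) (illustrative choice)."""
    return np.array([x[0]**2, x[0] * x[1], x[1]**2])

def utility(x):
    """Running cost r(x) entering the Bellman residual (placeholder)."""
    return x @ x

W = np.zeros(3)              # critic NN weights
buffer = []                  # experience-replay buffer of (regressor, cost) pairs
x = np.array([1.0, -0.5])
x_hat = x.copy()             # last event-sampled state held by the controller

for k in range(2000):
    x_next = x + dt * f(x)                      # integrate the toy dynamics
    sigma = (phi(x_next) - phi(x)) / dt         # approximates d/dt of phi(x)
    r = utility(x)

    # Event-triggering rule: act only when the gap between the current state
    # and the last transmitted state exceeds a state-dependent threshold
    # (the paper uses an adaptive threshold; a fixed gain is used here).
    if np.linalg.norm(x - x_hat) > trigger_gain * np.linalg.norm(x):
        x_hat = x.copy()                        # transmit / update the controller
        buffer.append((sigma, r))

        # Experience replay: each normalized gradient step drives the Bellman
        # residual r + W^T sigma toward zero over the stored samples, not just
        # the newest one, which is what removes the PE requirement.
        for s, cost in buffer[-30:]:
            e = cost + W @ s
            W -= alpha * e * s / (1.0 + s @ s) ** 2

    x = x_next

print("learned critic weights:", W)
```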
