Abstract
In this paper, the distributed zero-sum differential game problem for multi-agent nonlinear systems is investigated using the adaptive dynamic programming (ADP) technique, bringing together distributed control, the zero-sum differential graphical game, and ADP. To find the cooperative Nash-equilibrium solutions of the associated coupled Hamilton-Jacobi-Isaacs (HJI) equations, the ADP technique is employed, in which a critic network is constructed to approximate the cooperative cost function online via a novel updating law. The Lyapunov direct method is used to show that the closed-loop system states and the weight estimation errors are uniformly ultimately bounded (UUB). Finally, simulation results demonstrate the effectiveness of the proposed method.
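To make the critic-network idea concrete, the sketch below illustrates a generic (single-agent) critic-weight tuning rule of the kind commonly used in ADP: the value function is approximated as V(x) ≈ Wᵀφ(x), and the weights are adjusted by normalized gradient descent on the HJI residual. This is a minimal illustrative sketch only, not the paper's distributed multi-agent updating law; the basis φ, the dynamics f, g, k, and the gains are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch: generic critic-weight tuning for a zero-sum game,
# V(x) ~= W^T phi(x). Not the paper's distributed graphical-game law.

def phi(x):
    """Polynomial basis for the critic (hypothetical choice)."""
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

def grad_phi(x):
    """Jacobian d(phi)/dx, used to form the HJI residual."""
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2,     x1],
                     [0.0,    2 * x2]])

def critic_step(W, x, u, w, Q, R, gamma, f, g, k, alpha=0.05):
    """One normalized gradient-descent step driving the HJI residual to zero.

    Residual: r = x^T Q x + u^T R u - gamma^2 w^T w
                  + (dV/dx)^T (f(x) + g(x) u + k(x) w),
    with dV/dx approximated by grad_phi(x)^T W.
    """
    xdot = f(x) + g(x) @ u + k(x) @ w
    dV_dx = grad_phi(x).T @ W
    residual = x @ Q @ x + u @ R @ u - gamma**2 * (w @ w) + dV_dx @ xdot
    sigma = grad_phi(x) @ xdot                       # regressor along the trajectory
    W = W - alpha * residual * sigma / (1.0 + sigma @ sigma) ** 2
    return W
```

In the paper's setting, each agent would maintain such a critic for the coupled graphical game, and the stability analysis concerns the boundedness (UUB) of the weight estimation error between W and the ideal weights.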