Abstract

This study addresses optimal bipartite consensus control (OBCC) problems in heterogeneous multi-agent systems (MASs) without relying on the agents' dynamics. Motivated by the need for model-free optimal consensus control in complex MASs, a novel distributed scheme based on reinforcement learning (RL) is proposed. The MAS network is randomly partitioned into sub-networks: agents within each subgroup collaborate to achieve tracking control, with their positions and velocities converging to a common value, while agents in distinct subgroups compete to pursue different tracking objectives. The heterogeneous MASs considered have unknown first- and second-order dynamics, which adds to the complexity of the problem. To solve the OBCC problem, the policy iteration (PI) algorithm is used to obtain solutions to the discrete-time Hamilton-Jacobi-Bellman (HJB) equations, implemented within a data-driven actor-critic neural network (ACNN) framework. Finally, numerical simulations confirm the validity of the proposed approach.
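To illustrate the policy iteration principle the abstract refers to, the sketch below applies PI to a scalar discrete-time linear-quadratic problem, where the HJB equation reduces to a Riccati fixed point. This is a generic, hypothetical example: the system parameters (`a`, `b`, `q`, `r`) and the linear-policy structure are assumptions for illustration and are not taken from the paper, which treats model-free heterogeneous MASs with actor-critic approximation.

```python
# Minimal, hypothetical sketch of policy iteration (PI) for a scalar
# discrete-time LQ problem x_{k+1} = a*x_k + b*u_k with stage cost
# q*x^2 + r*u^2 and linear policy u = -k*x. All parameter values are
# illustrative assumptions, not the paper's MAS setup.

def policy_iteration(a, b, q, r, k0, iters=50):
    """Alternate policy evaluation and policy improvement."""
    k = k0
    for _ in range(iters):
        c = a - b * k  # closed-loop dynamics under current policy
        # Policy evaluation: P = q + r*k^2 + c^2 * P  (needs |c| < 1)
        p = (q + r * k * k) / (1.0 - c * c)
        # Policy improvement: greedy gain from the one-step lookahead
        k = a * b * p / (r + b * b * p)
    return k, p

def riccati(a, b, q, r, iters=2000):
    """Value-iteration fixed point of the discrete-time Riccati
    equation, used here only as an independent reference solution."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

if __name__ == "__main__":
    a, b, q, r = 0.95, 0.5, 1.0, 0.1
    k, p = policy_iteration(a, b, q, r, k0=0.5)
    p_star = riccati(a, b, q, r)
    print(abs(p - p_star) < 1e-6)  # PI converges to the HJB solution
```

Starting from a stabilizing initial gain, PI converges to the optimal value `p_star`; the paper's data-driven ACNN replaces these closed-form evaluation and improvement steps with learned critic and actor updates when the dynamics are unknown.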
