Abstract

This study addresses optimal bipartite consensus control (OBCC) problems in heterogeneous multi-agent systems (MASs) without relying on knowledge of the agents' dynamics. Motivated by the need for model-free optimal consensus control in complex MASs, a novel distributed scheme based on reinforcement learning (RL) is proposed. The MAS network is randomly partitioned into sub-networks: agents within each subgroup collaborate to achieve tracking control, with their positions and velocities converging to a common value, while agents in distinct subgroups compete to achieve different tracking objectives. Moreover, the heterogeneous MASs considered have unknown first- and second-order dynamics, which adds to the complexity of the problem. To solve the OBCC problem, a policy iteration (PI) algorithm is employed to obtain solutions of the discrete-time Hamilton-Jacobi-Bellman (HJB) equations within a data-driven actor-critic neural network (ACNN) framework. Finally, numerical simulations confirm the effectiveness of the proposed approach.
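The core computational idea named above, policy iteration alternating policy evaluation and policy improvement to solve a discrete-time HJB equation, can be sketched in its simplest linear-quadratic special case, where the HJB equation reduces to a discrete-time algebraic Riccati equation. The scalar dynamics and cost weights below are illustrative assumptions, not values from the paper, and the ACNN and graph structure are omitted for brevity:

```python
# Policy iteration (PI) for a scalar discrete-time LQR problem, the
# linear-quadratic special case of the HJB equation. Illustrative sketch:
# dynamics x_{k+1} = a*x_k + b*u_k, stage cost q*x^2 + r*u^2 (assumed values).
a, b, q, r = 1.2, 1.0, 1.0, 1.0

def policy_iteration(k0=0.5, iters=20):
    """Alternate policy evaluation and greedy policy improvement."""
    k = k0  # initial stabilizing gain: |a - b*k0| < 1 must hold
    for _ in range(iters):
        ac = a - b * k
        # Evaluation: solve the scalar Lyapunov equation
        # p = q + r*k^2 + ac^2 * p for the value of the current policy.
        p = (q + r * k * k) / (1.0 - ac * ac)
        # Improvement: gain that is greedy with respect to the value p.
        k = b * p * a / (r + b * b * p)
    return p, k
```

For these numbers the iterates converge to the stabilizing solution of the discrete-time Riccati equation, and the resulting closed loop `a - b*k` is stable. In the paper's model-free setting, the evaluation step would instead be performed from measured data by the critic network, and the improvement step by the actor.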
