Abstract

In this paper, a data-driven distributed control algorithm is developed to solve the consensus control problem for heterogeneous nonlinear Multi-Agent Systems (MAS). Obtaining consensus from the solution of the Hamilton–Jacobi–Bellman (HJB) equation is challenging when the nonlinear systems are unknown. To address this issue, an improved online reinforcement learning (RL) scheme is employed to generate an approximate solution for each agent so that consensus is achieved. Unlike model-based RL and traditional algorithms, this method leverages input/output (I/O) data to guide policy learning without any prior knowledge of the agent dynamics. Furthermore, the adaptability of the algorithm to heterogeneous nonlinear agents is enhanced by combining online updates of the control strategy with dynamic linearization (DL). A convergence analysis of the algorithm is provided, along with the impact of the learning-rate parameters on the consensus of the MAS. Simulations comparing the proposed algorithm with other data-driven methods verify its stability and adaptability.
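
The abstract does not give the algorithm's equations, so the following is a minimal sketch of how an I/O-data-driven distributed consensus protocol of this general kind can be realized, assuming a compact-form dynamic linearization (CFDL) with online pseudo-partial-derivative (PPD) estimation and a gradient-style control update. The communication graph, the agent dynamics, and all gains (eta, mu, rho, lam) below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

N = 3                                    # number of follower agents
A = np.array([[0., 1., 0.],              # adjacency weights a_ij of the follower graph
              [1., 0., 1.],
              [0., 1., 0.]])
b = np.array([1.0, 0.0, 0.0])            # b_i > 0: agent i receives the leader's output

eta, mu = 0.5, 1.0                       # PPD-estimator step size / regularizer (assumed)
rho, lam = 0.8, 1.0                      # control step size / regularizer (assumed)

def plant(i, y, u):
    """Unknown heterogeneous nonlinear agents; used only to generate I/O data."""
    if i == 0:
        return 0.6 * y + u / (1.0 + y**2)
    if i == 1:
        return 0.5 * np.sin(y) + 1.2 * u
    return 0.7 * y + u + 0.1 * u**2

def y_leader(k):
    """Leader (reference) trajectory: a slow square wave."""
    return np.sign(np.sin(2.0 * np.pi * k / 100.0))

T = 200
y = np.zeros((N, T + 1))                 # measured outputs
u = np.zeros((N, T + 1))                 # applied inputs
phi = np.ones(N)                         # PPD estimates, phi_i(0) = 1

for k in range(1, T):
    for i in range(N):
        # 1) Update the PPD from measured I/O increments; the CFDL model
        #    assumes dy_i(k+1) ~= phi_i(k) * du_i(k), so no plant model is needed.
        du = u[i, k - 1] - u[i, k - 2]
        dy = y[i, k] - y[i, k - 1]
        phi[i] += eta * du / (mu + du**2) * (dy - phi[i] * du)
        if abs(phi[i]) < 1e-4 or phi[i] < 0.0:
            phi[i] = 1.0                 # standard reset safeguard for the estimate

        # 2) Local consensus error: neighbors' outputs plus the leader if visible.
        xi = sum(A[i, j] * (y[j, k] - y[i, k]) for j in range(N)) \
             + b[i] * (y_leader(k) - y[i, k])

        # 3) Gradient-style control update driven only by xi and the PPD estimate.
        u[i, k] = u[i, k - 1] + rho * phi[i] / (lam + phi[i]**2) * xi

    for i in range(N):                   # advance the (unknown) plants one step
        y[i, k + 1] = plant(i, y[i, k], u[i, k])

print("final tracking errors:", np.round(y[:, T] - y_leader(T), 3))
```

Each agent uses only its own I/O increments and its neighbors' outputs, which is the distributed, model-free character the abstract describes; the PPD reset is a common safeguard in model-free adaptive control when the estimate collapses or changes sign.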
