Abstract

This article considers the output tracking control problem of nonidentical linear multiagent systems (MASs) using a model-free reinforcement learning (RL) algorithm, where some followers have no prior knowledge of the leader's information. To lower the communication and computation burden among agents, an event-driven adaptive distributed observer is proposed to estimate the leader's system matrix and state, in which the relative-state estimates are governed by an edge-based predictor. Meanwhile, an integral input-based triggering condition determines when each agent transmits its control input to its neighbors. An RL-based state feedback controller is then developed for each agent; by introducing a discounted performance function, the output tracking problem is converted into an optimal control problem whose solution is characterized by inhomogeneous algebraic Riccati equations (AREs). An off-policy RL algorithm is used to learn the solution of the inhomogeneous AREs online without requiring any knowledge of the system dynamics. Rigorous analysis shows that, under the proposed event-driven adaptive observer mechanism and RL algorithm, all followers track the leader's output asymptotically. Finally, a numerical simulation is provided to verify the theoretical results.
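
As a rough illustration of the optimal-control reformulation described above (not the paper's algorithm itself), the sketch below runs model-based Kleinman policy iteration on a discounted ARE for a single toy agent. The system matrices, weights, and discount rate are invented for the example; the paper's AREs additionally carry an inhomogeneous term tied to the leader's trajectory, omitted here, and its off-policy RL replaces the model-based Lyapunov solves with data-based least squares so that the dynamics are never needed.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy single-agent dynamics (illustrative values, not from the paper).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # tracking-error weight
R = np.array([[1.0]])   # control-effort weight
gamma = 0.2             # discount rate in the performance index

# A discount factor e^{-gamma*t} in the cost is equivalent to shifting
# the system matrix by -gamma/2 * I and solving an undiscounted problem.
A_d = A - 0.5 * gamma * np.eye(2)

K = np.zeros((1, 2))    # initial gain; valid here because A_d is stable
for _ in range(20):
    Ac = A_d - B @ K
    # Policy evaluation: solve Ac^T P + P Ac + Q + K^T R K = 0.
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P.
    K = np.linalg.solve(R, B.T @ P)

# On convergence, P satisfies the discounted ARE
# A^T P + P A - gamma*P + Q - P B R^{-1} B^T P = 0.
print("P =\n", P)
print("K =", K)
```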
