Abstract

This article considers the output tracking control problem for nonidentical linear multiagent systems (MASs) using a model-free reinforcement learning (RL) algorithm, where only some followers have direct access to the leader's information. To lower the communication and computation burden among agents, an event-driven adaptive distributed observer is proposed to estimate the leader's system matrix and state; the observer is built on estimates of the relative states governed by an edge-based predictor. Meanwhile, an integral input-based triggering condition determines when each agent transmits its control input to its neighbors. An RL-based state-feedback controller is then developed for each agent, and the output tracking problem is converted into an optimal control problem by introducing a discounted performance function. This formulation yields inhomogeneous algebraic Riccati equations (AREs) whose solutions characterize the optimal controllers. An off-policy RL algorithm learns the solution of the inhomogeneous AREs online without requiring any knowledge of the system dynamics. Rigorous analysis shows that, under the proposed event-driven adaptive observer mechanism and RL algorithm, all followers asymptotically track the leader's output. Finally, a numerical simulation is presented to verify the theoretical results.
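For context, the discounted performance function mentioned above is typically of the form (a standard discounted LQR index; the exact weights and error definition used in the article may differ):

\[
J_i(t) = \int_t^{\infty} e^{-\gamma(\tau - t)} \left[ e_i^{\top}(\tau)\, Q_i\, e_i(\tau) + u_i^{\top}(\tau)\, R_i\, u_i(\tau) \right] \mathrm{d}\tau, \qquad \gamma > 0,
\]

where \(e_i\) is agent \(i\)'s output tracking error. For linear dynamics, minimizing such an index leads to a discounted ARE of the form

\[
A^{\top} P + P A - \gamma P + Q - P B R^{-1} B^{\top} P = 0,
\]

which in the tracking setting picks up additional leader-induced linear terms, making it inhomogeneous. The article's algorithm solves these equations off-policy, from measured data and without knowledge of \(A\) or \(B\). Purely as a reference point, the minimal sketch below runs model-based policy iteration (Kleinman's method) on the homogeneous discounted ARE, computing the fixed point that a data-driven off-policy iteration would converge to; all matrices and the discount factor are hypothetical placeholders, not values from the article.

```python
# Minimal sketch: model-based policy iteration (Kleinman's method) for the
# discounted ARE  A'P + PA - gamma*P + Q - P B R^{-1} B' P = 0.
# NOTE: the article's algorithm is model-FREE and off-policy; this model-based
# iteration only illustrates the fixed point such an algorithm learns from data.
# A, B, Q, R, gamma below are hypothetical example values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example dynamics (Hurwitz)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)        # state / tracking-error weight
R = np.eye(1)        # control-effort weight
gamma = 0.1          # discount factor

# Discounting is equivalent to shifting the dynamics by -(gamma/2) * I.
A_bar = A - 0.5 * gamma * np.eye(2)
K = np.zeros((1, 2))  # initial gain; K = 0 stabilizes A_bar since A is Hurwitz

for _ in range(50):
    F = A_bar - B @ K
    # Policy evaluation: solve F' P + P F = -(Q + K' R K) for P.
    P = solve_continuous_lyapunov(F.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print("P* =\n", P)   # stabilizing solution of the discounted ARE
print("K* =", K)     # optimal feedback gain, u = -K x
```

An off-policy variant replaces the Lyapunov solve with a least-squares fit over trajectory data collected under an exploratory behavior policy, which is what removes the need for \(A\) and \(B\).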
