Abstract

In this work, we investigate consensus problems of discrete-time multi-agent systems (DTMASs) with completely unknown dynamics using a reinforcement learning (RL) technique. Unlike policy iteration (PI) based algorithms, which require an admissible initial control policy, this work proposes a value iteration (VI) based model-free algorithm that achieves consensus of DTMASs with optimal performance and no requirement of an admissible initial control policy. First, to apply the RL method, the consensus problem is modeled as an optimal control problem of the tracking-error system for each agent. Then, we introduce a VI algorithm for consensus of DTMASs and give a novel convergence analysis for this algorithm that does not require an admissible initial control input. To implement the proposed VI algorithm and achieve consensus of DTMASs without knowledge of the dynamics, we construct actor-critic networks to estimate the value functions and optimal control inputs online. Finally, simulation results are given to show the validity of the proposed algorithm.
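Although the paper's algorithm is model-free and implemented with actor-critic networks, the VI recursion it builds on can be illustrated on a single agent's tracking-error system. The sketch below is a minimal, model-based illustration assuming linear error dynamics e_{k+1} = A e_k + B u_k and a quadratic cost with weights Q and R (the matrices A, B, Q, R are illustrative choices, not taken from the paper). It iterates the quadratic value V_k(e) = e^T P_k e from P_0 = 0, highlighting the abstract's key point that VI, unlike PI, needs no admissible (stabilizing) initial policy.

```python
import numpy as np

# Hypothetical tracking-error dynamics e_{k+1} = A e_k + B u_k for one agent.
# The paper's algorithm is model-free; this model-based recursion only
# illustrates the fixed point that value iteration converges to.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # tracking-error weight
R = np.array([[1.0]])  # control-effort weight

# Value iteration on V_k(e) = e^T P_k e.
# P starts at zero: no admissible initial policy is required,
# which is the key difference from policy iteration.
P = np.zeros((2, 2))
for k in range(500):
    # Greedy feedback gain induced by the current value estimate
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ K
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("Converged P:\n", P)
print("Optimal feedback u = -K e, with K =", K)
```

In the model-free setting described in the abstract, the same value estimate would instead be fit from measured data by a critic network, with an actor network approximating the greedy control input, so that A and B never need to be known.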
