Abstract

In this work, we investigate consensus problems of discrete-time (DT) multi-agent systems (MASs) with completely unknown dynamics using a reinforcement learning (RL) technique. Unlike policy iteration (PI) based algorithms, which require an admissible initial control policy, this work proposes a value iteration (VI) based model-free algorithm that achieves optimal consensus of DTMASs without requiring an admissible initial control policy. First, in order to apply the RL method, the consensus problem is reformulated as an optimal control problem for the tracking-error system of each agent. Then, we introduce a VI algorithm for consensus of DTMASs and provide a novel convergence analysis for this algorithm that does not require an admissible initial control input. To implement the proposed VI algorithm and achieve consensus of DTMASs without knowledge of the dynamics, we construct actor-critic networks that estimate the value functions and optimal control inputs online. Finally, simulation results are given to demonstrate the validity of the proposed algorithm.
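Although the paper's algorithm is model-free, the core property the abstract highlights can be illustrated with a minimal model-based sketch of value iteration on a linear-quadratic tracking-error system: the iteration may start from a zero value function, so no admissible (stabilizing) initial policy is needed, in contrast to policy iteration. The matrices A, B, Q, R below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical single-agent tracking-error dynamics e_{k+1} = A e_k + B u_k.
# All matrices here are assumptions chosen for illustration only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # tracking-error weight
R = np.array([[1.0]])  # control-effort weight

# Value iteration on the quadratic value function V_k(e) = e^T P_k e.
# Key point: P starts at zero, so no stabilizing initial gain is required.
P = np.zeros((2, 2))
for _ in range(500):
    BtPB = R + B.T @ P @ B
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(BtPB, B.T @ P @ A)
    if np.max(np.abs(P_next - P)) < 1e-10:  # stop once the recursion settles
        P = P_next
        break
    P = P_next

# Near-optimal feedback gain recovered from the converged value function.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("converged P:\n", P)
print("feedback gain K:", K)
```

In the paper this recursion is carried out without A and B: actor-critic networks estimate the value function and control input online from measured data, but the convergence behavior being approximated is the same VI iteration sketched above.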
