Abstract

To achieve consensus in discrete-time multi-agent systems, an optimal control policy is designed based on off-policy reinforcement learning. Following the centralized-training, decentralized-execution paradigm, we first define a centralized, shared value function. A value iteration adaptive dynamic programming method is then proposed to approximate the solution of the Bellman optimality equation, together with a convergence analysis. Furthermore, an actor-critic structure is given for implementation: a single critic network approximates the optimal centralized value function, while decentralized actor networks, each using local observations from its neighbors, learn the optimal policy for the corresponding agent. Finally, the proposed algorithm is verified in a leader-follower consensus case.
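To illustrate the value-iteration step the abstract refers to, the following is a minimal sketch (not the paper's algorithm) of iterating the Bellman optimality equation for a scalar discrete-time linear-quadratic problem, where the quadratic value function V(x) = p x² reduces the Bellman update to a scalar Riccati iteration. The system parameters (a, b, q, r) and function names are illustrative assumptions, not taken from the paper.

```python
def value_iteration(a, b, q, r, iters=200):
    """Value iteration on the Bellman optimality equation for
    x_{k+1} = a*x_k + b*u_k with stage cost q*x^2 + r*u^2.
    With V_j(x) = p_j * x^2, the Bellman update becomes the
    scalar Riccati iteration:
        p_{j+1} = q + a^2 p_j - (a b p_j)^2 / (r + b^2 p_j)
    """
    p = 0.0  # standard VI-ADP initialization V_0 = 0
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

def optimal_gain(a, b, r, p):
    # Greedy policy u = -K x extracted from the converged value function.
    return a * b * p / (r + b * b * p)

p_star = value_iteration(a=1.2, b=1.0, q=1.0, r=1.0)
k_star = optimal_gain(1.2, 1.0, 1.0, p_star)
# |a - b*k_star| < 1 indicates the resulting closed loop is stable
```

In the multi-agent setting of the paper, the critic plays the role of `p_star` (a shared, centralized value estimate), while each agent's actor corresponds to a local feedback gain computed from neighbor observations.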
