Abstract

This paper brings together cooperative control, reinforcement learning, and game theory to present a multi-agent distributed formulation for graphical games. The notion of graphical games is developed for dynamical systems, where the dynamics and performance index of each node depend only on local neighbor information. We propose a cooperative policy iteration algorithm for graphical games. This algorithm converges to the best response when the neighbors of each agent do not update their policies, and to the Nash equilibrium when all agents update their policies simultaneously. It is also shown that the convergence of this algorithm depends on the rate of convergence of each player's neighbors, the graph topology, and the user-defined weighting matrices in the performance index. This framework provides a basis for developing online adaptive learning solutions of graphical games in real time.
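To make the flavor of the algorithm concrete, the following is a minimal numerical sketch, not the paper's algorithm: the paper develops the theory for dynamical systems with coupled performance indices, whereas the toy below assumes a discrete-time linear-quadratic setting with scalar agent dynamics, consensus-type local costs, and a 3-agent line graph, all of which are illustrative assumptions. Each sweep lets every agent evaluate its local value under the current joint policy and then compute a best-response improvement while its neighbors' policies are held fixed; updating all agents in each sweep mimics the simultaneous-update case discussed in the abstract.

```python
# Illustrative sketch of cooperative policy iteration in a linear-quadratic
# "graphical game".  Dynamics, costs, and the graph are assumptions for the
# example, not taken from the paper.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

N = 3                                   # agents on a line graph: 0 -- 1 -- 2
neighbors = {0: [1], 1: [0, 2], 2: [1]}
a, b, r, gamma = 0.9, 1.0, 1.0, 0.95    # per-agent scalar dynamics and weights (assumed)

def joint_dynamics(K):
    """Closed-loop joint matrix A - B K for joint state x in R^N, with u_i = -K[i] @ x."""
    A = a * np.eye(N)
    B = b * np.eye(N)
    return A - B @ K

def local_Q(i):
    """Quadratic weight penalizing disagreement of agent i with its neighbors."""
    Q = np.zeros((N, N))
    for j in neighbors[i]:
        e = np.zeros(N)
        e[i], e[j] = 1.0, -1.0
        Q += np.outer(e, e)             # adds (x_i - x_j)^2 to agent i's stage cost
    return Q

def evaluate(i, K):
    """Policy evaluation: value matrix P_i of agent i's discounted local cost under joint policy K."""
    Acl = np.sqrt(gamma) * joint_dynamics(K)
    stage = local_Q(i) + r * np.outer(K[i], K[i])
    # Solves Acl^T P Acl - P + stage = 0 (discounted Lyapunov equation).
    return solve_discrete_lyapunov(Acl.T, stage)

def improve(i, K, P):
    """Policy improvement: best response of agent i with its neighbors' policies held fixed."""
    A, B = a * np.eye(N), b * np.eye(N)
    K_others = K.copy()
    K_others[i] = 0.0                   # absorb the other agents' feedback into the drift
    A_i = A - B @ K_others
    b_i = B[:, i:i + 1]
    gain = np.linalg.solve(r + gamma * b_i.T @ P @ b_i, gamma * b_i.T @ P @ A_i)
    K_new = K.copy()
    K_new[i] = gain.ravel()
    return K_new

K = np.zeros((N, N))                    # initial policy (admissible here since |a| < 1)
for sweep in range(50):
    for i in range(N):                  # every agent updates in each sweep
        P = evaluate(i, K)
        K = improve(i, K, P)
print("final feedback gains:\n", np.round(K, 3))
```

Holding all rows of K except one fixed across sweeps would instead iterate a single agent to its best response, the other convergence mode mentioned in the abstract.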
