Abstract

The dynamic graphical game is a special class of games in which agents interact over a communication graph. This paper introduces an online model-free adaptive learning solution for dynamic graphical games. Reinforcement learning is applied in the form of solutions to a set of modified coupled Bellman equations. The technique is implemented in a distributed fashion using only local neighborhood information, without a priori knowledge of the agents' dynamics. This is accomplished by means of adaptive critics, where a multi-layer perceptron neural network approximates the online solution. To this end, a novel coupled Riccati equation is developed for the graphical game. The validity of the proposed online adaptive learning solution is tested on a graphical example in which follower agents learn to synchronize their behavior to follow a leader.
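The abstract does not reproduce the paper's equations, so the following is only a minimal, illustrative Python sketch of the general idea: a model-free, adaptive-critic (Q-style quadratic critic) update for a single agent that is driven purely by measured local neighborhood data. All names, dimensions, gains, the quadratic basis, and the initialization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

n, m = 2, 1                  # per-agent state / input dimensions (assumed)
gamma = 0.95                 # discount factor (assumed)
alpha_c = 0.05               # critic learning rate (assumed)

def local_error(x_i, neighbor_states, x_leader, edge_weights, pin_gain):
    """Local neighborhood tracking error built only from in-neighbor and leader data."""
    err = pin_gain * (x_leader - x_i)
    for w, x_j in zip(edge_weights, neighbor_states):
        err += w * (x_j - x_i)
    return err

def basis(delta, u):
    """Quadratic basis over the joint vector z = [delta; u] for the critic."""
    z = np.concatenate([delta, u])
    return np.outer(z, z)[np.triu_indices(n + m)]

# Illustrative initialization: start the critic kernel H near the identity.
H0 = np.eye(n + m)
w0 = H0.copy()
w0[np.triu_indices(n + m, k=1)] *= 2.0   # off-diagonal weights carry a factor of 2
w_c = w0[np.triu_indices(n + m)]

def weights_to_H(w):
    """Recover the symmetric kernel H of the quadratic critic from its weight vector."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = w
    return 0.5 * (H + H.T)

def policy(delta):
    """Greedy (actor) policy extracted from the current critic kernel."""
    H = weights_to_H(w_c)
    Huu, Hud = H[n:, n:], H[n:, :n]
    return -np.linalg.solve(Huu + 1e-6 * np.eye(m), Hud @ delta)

def critic_step(delta, u, cost, delta_next):
    """One model-free temporal-difference update from measured local transitions."""
    global w_c
    u_next = policy(delta_next)
    td = cost + gamma * w_c @ basis(delta_next, u_next) - w_c @ basis(delta, u)
    w_c += alpha_c * td * basis(delta, u)   # reduce the local Bellman error
    return td
```

In this sketch each agent would, at every step, form its local error with `local_error`, apply `policy`, observe the incurred cost and the next local error, and call `critic_step`; no model of the agents' dynamics is used, only measured data, which is the sense in which the learning is model-free and distributed.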
