Abstract

This paper studies a new class of multi-agent discrete-time dynamical graphical games, in which interactions between agents are restricted by a communication graph structure. The paper brings together discrete Hamiltonian mechanics, optimal control theory, cooperative control, game theory, reinforcement learning, and neural network structures to solve these multi-agent dynamical graphical games. Graphical game Bellman equations are derived and shown to be equivalent to certain graphical game Hamilton-Jacobi-Bellman equations developed herein. Reinforcement learning techniques are used to solve these dynamical graphical games: Heuristic Dynamic Programming and Dual Heuristic Programming are extended to solve the games using only neighborhood information. An online adaptive learning structure is implemented using actor-critic networks to solve these graphical games.
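To make the Heuristic Dynamic Programming idea concrete, the sketch below shows the HDP critic recursion reduced to a single scalar agent with linear dynamics and quadratic cost; the paper's graphical-game coupling through neighbor states is omitted, and all dynamics and cost parameters here are illustrative choices, not values from the paper. A quadratic critic V(x) = p·x² is iterated by evaluating the cost-to-go under the current greedy (actor) policy, and the fixed point of the recursion satisfies the discrete-time algebraic Riccati equation.

```python
# HDP critic recursion for one scalar agent (graph coupling omitted).
# Dynamics:   x_{k+1} = a*x_k + b*u_k
# Stage cost: q*x^2 + r*u^2
# Critic:     V(x) = p*x^2   (quadratic, so p is the only parameter)
# All numeric values are illustrative assumptions.

a, b, q, r = 0.9, 1.0, 1.0, 1.0

p = 0.0  # critic initialized at zero (value-iteration style HDP)
for _ in range(200):
    # Actor: greedy feedback gain u = -k_gain * x for the current critic
    k_gain = a * b * p / (r + b * b * p)
    # Critic update: one-step cost plus value of the successor state
    p = q + r * k_gain**2 + p * (a - b * k_gain) ** 2

# At convergence, p should satisfy the discrete algebraic Riccati equation
dare_residual = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p) - p
print(p, dare_residual)
```

In the graphical-game setting of the paper, each agent runs an analogous critic update, but its local error and cost depend only on the states of its graph neighbors, which is what allows the distributed, neighborhood-information-only solution.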
