Abstract

This brief proposes a game-theoretic inverse reinforcement learning (GT-IRL) framework that learns the parameters of both the dynamic system and the individual cost functions of a multistage game from demonstrated trajectories. In contrast to the probabilistic approaches favored in the computer science community and the residual-minimization solutions common in the control community, our framework addresses the problem in a deterministic setting by differentiating the Pontryagin's maximum principle (PMP) equations of an open-loop Nash equilibrium (OLNE), an approach inspired by Jin et al. (2020). The differentiated equations for a multi-player nonzero-sum multistage game are shown to be equivalent to the PMP equations for another affine-quadratic nonzero-sum multistage game and can be solved by explicit recursions. A similar result is established for two-player zero-sum games. Simulation examples demonstrate the effectiveness of the proposed algorithms.
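To make the core idea of the abstract concrete, the sketch below illustrates differentiating through the PMP equations of an OLNE in the simplest possible setting: a two-player affine-quadratic multistage game, where the PMP conditions (dynamics, costate recursions, and stationarity of each player's Hamiltonian) are linear and can be stacked into a single linear system. An imitation loss against a demonstrated state trajectory is then differentiated with respect to unknown cost weights by automatic differentiation through the solve. This is a minimal sketch under assumed dynamics and cost structure, not the paper's algorithm: the paper solves the differentiated equations by explicit recursions rather than a dense linear solve, and all names here (`olne_states`, `A`, `B1`, `B2`, `q`, the diagonal cost parameterization) are illustrative choices of ours.

```python
import jax
import jax.numpy as jnp

n, m, T = 2, 1, 10                         # state dim, control dim, horizon
A  = jnp.array([[1.0, 0.1], [0.0, 1.0]])   # assumed shared dynamics
B1 = jnp.array([[0.0], [0.1]])             # player 1 input matrix (assumed)
B2 = jnp.array([[0.0], [0.05]])            # player 2 input matrix (assumed)
R1 = R2 = jnp.eye(m)                       # known control-cost weights
x0 = jnp.array([1.0, 0.0])

def olne_states(q):
    """Solve the linear PMP system of the OLNE for stage costs
    (1/2)(x' Q_i x + u_i' R_i u_i), Q_i = q[i] * I (terminal weight = Q_i).
    Controls are eliminated via stationarity u_i = -R_i^{-1} B_i' lam^i_{t+1};
    unknowns are stacked as z = [x_1..x_T, lam1_1..lam1_T, lam2_1..lam2_T]."""
    Q1, Q2 = q[0] * jnp.eye(n), q[1] * jnp.eye(n)
    S1 = B1 @ jnp.linalg.solve(R1, B1.T)   # B1 R1^{-1} B1'
    S2 = B2 @ jnp.linalg.solve(R2, B2.T)
    N = 3 * T * n
    M, b = jnp.zeros((N, N)), jnp.zeros(N)
    xi = lambda t: (t - 1) * n             # block offset of x_t,    t = 1..T
    l1 = lambda t: (T + t - 1) * n         # block offset of lam1_t, t = 1..T
    l2 = lambda t: (2 * T + t - 1) * n     # block offset of lam2_t, t = 1..T
    row = 0
    # dynamics rows: x_{t+1} - A x_t + S1 lam1_{t+1} + S2 lam2_{t+1} = 0
    for t in range(T):
        r = slice(row, row + n)
        M = M.at[r, xi(t + 1):xi(t + 1) + n].set(jnp.eye(n))
        if t >= 1:
            M = M.at[r, xi(t):xi(t) + n].set(-A)
        else:
            b = b.at[r].set(A @ x0)        # x_0 is given, moves to the RHS
        M = M.at[r, l1(t + 1):l1(t + 1) + n].set(S1)
        M = M.at[r, l2(t + 1):l2(t + 1) + n].set(S2)
        row += n
    # costate rows: lam^i_t - Q_i x_t - A' lam^i_{t+1} = 0 for t < T,
    # with the terminal condition lam^i_T - Q_i x_T = 0
    for Qi, li in ((Q1, l1), (Q2, l2)):
        for t in range(1, T + 1):
            r = slice(row, row + n)
            M = M.at[r, li(t):li(t) + n].set(jnp.eye(n))
            M = M.at[r, xi(t):xi(t) + n].set(-Qi)
            if t < T:
                M = M.at[r, li(t + 1):li(t + 1) + n].set(-A.T)
            row += n
    z = jnp.linalg.solve(M, b)
    return z[:T * n].reshape(T, n)         # equilibrium states x_1..x_T

# synthetic "demonstration" from hidden weights, then the IRL gradient:
x_demo = olne_states(jnp.array([2.0, 0.5]))
loss = lambda q: jnp.sum((olne_states(q) - x_demo) ** 2)
print(jax.grad(loss)(jnp.array([1.0, 1.0])))   # descent direction on q
```

The gradient comes out of differentiating through `jnp.linalg.solve`, i.e., through the equilibrium conditions themselves, which is the mechanism the abstract refers to; the paper's contribution is that for general nonzero-sum multistage games the differentiated PMP system is itself the PMP system of an affine-quadratic game, so this dense solve can be replaced by explicit forward-backward recursions.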
