Abstract

This paper proposes a new unified inverse reinforcement learning (IRL) framework based on trust-region methods and the recently proposed Pontryagin differentiable programming (PDP) method of Jin et al. (2020). The framework learns the parameters of both the system model and the cost function from demonstrated trajectories for three classes of problems: $N$-player nonzero-sum multistage games, 2-player zero-sum multistage games, and 1-player optimal control. Unlike existing frameworks that update the learning parameters with a gradient step, our framework updates them with the candidate solution of a trust-region subproblem ($\mathsf{TRS}$), whose required gradient and Hessian are obtained by differentiating the Pontryagin Maximum Principle (PMP) equations once and twice, respectively. The differentiated equations are shown to be equivalent to the PMP equations of affine-quadratic games and optimal control problems, and can therefore be solved by explicit recursions. Extensive simulation examples and comparisons demonstrate the effectiveness of the proposed algorithm.
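To make the parameter-update scheme concrete, the sketch below shows a generic trust-region loop of the kind the abstract describes: at each iteration the trust-region subproblem is solved for a candidate step, which is accepted or rejected based on the ratio of actual to predicted loss reduction. This is a minimal illustration, not the paper's implementation; the function name `loss_grad_hess` is a hypothetical placeholder for the routine that would return the imitation loss together with the gradient and Hessian obtained (in the paper) by differentiating the PMP equations once and twice.

```python
import numpy as np

def solve_trs(g, H, radius):
    """Minimize  g^T d + 0.5 d^T H d  subject to  ||d|| <= radius.
    Standard secular-equation approach; the degenerate 'hard case'
    is not handled in this sketch."""
    n = g.size
    eigmin = np.linalg.eigvalsh(H)[0]
    if eigmin > 0:
        d = np.linalg.solve(H, -g)              # unconstrained Newton step
        if np.linalg.norm(d) <= radius:
            return d
    # Boundary solution: find lam >= max(0, -eigmin) with ||d(lam)|| = radius.
    step = lambda lam: np.linalg.solve(H + lam * np.eye(n), -g)
    lo = max(0.0, -eigmin) + 1e-10
    hi = lo + 1.0
    while np.linalg.norm(step(hi)) > radius:    # bracket the multiplier
        hi *= 2.0
    for _ in range(80):                         # bisection on lam
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step(mid)) > radius:
            lo = mid
        else:
            hi = mid
    return step(hi)

def trust_region_update(theta0, loss_grad_hess, radius=1.0, max_iter=100, tol=1e-8):
    """Generic trust-region parameter learning loop.
    `loss_grad_hess(theta)` must return (loss, gradient, Hessian) of the
    imitation loss at theta (a hypothetical user-supplied callback)."""
    theta, rad = np.asarray(theta0, dtype=float), radius
    for _ in range(max_iter):
        loss, g, H = loss_grad_hess(theta)
        if np.linalg.norm(g) < tol:
            break
        d = solve_trs(g, H, rad)
        pred = -(g @ d + 0.5 * d @ H @ d)       # predicted reduction
        rho = (loss - loss_grad_hess(theta + d)[0]) / max(pred, 1e-16)
        if rho > 0.1:                           # accept the candidate step
            theta = theta + d
        # Standard radius adjustment based on the reduction ratio.
        rad = rad * 2.0 if rho > 0.75 else (rad * 0.25 if rho < 0.1 else rad)
    return theta
```

The key contrast with gradient-based IRL updates is that the step direction and length come from the curvature-aware subproblem rather than a fixed learning rate, which is what the second differentiation of the PMP equations enables.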
