Abstract

Generative adversarial imitation learning (GAIL) regards imitation learning (IL) as a distribution matching problem between the state–action distributions of the expert policy and the learned policy. In this paper, we focus on the generalization and computational properties of policy classes. We prove that generalization can be guaranteed in GAIL when the class of policies is well controlled. Building on this generalization capability, we introduce distributional reinforcement learning (RL) into GAIL and propose the greedy distributional soft gradient (GDSG) algorithm to solve GAIL. The main advantages of GDSG are as follows: (1) Q-value overestimation, a crucial source of instability in GAIL with off-policy training, can be alleviated by distributional RL. (2) By incorporating the maximum entropy objective, the policy achieves better performance and sample efficiency through sufficient exploration. Moreover, GDSG attains a sublinear convergence rate to a stationary solution. Comprehensive experiments in MuJoCo environments show that GDSG mimics expert demonstrations better than previous GAIL variants.
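
To make the two ingredients named in the abstract concrete, the sketch below combines a GAIL-style discriminator reward with a quantile-based distributional soft critic and an entropy-regularized policy update. This is a minimal illustration under common conventions (QR-DQN quantile Huber loss, SAC-style soft target), not the authors' GDSG implementation; all network sizes, the surrogate reward form, and names such as `surrogate_reward` and `N_QUANTILES` are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): distributional soft critic
# trained on a discriminator-based reward, with an entropy-regularized policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, N_QUANTILES, ALPHA, GAMMA = 17, 6, 32, 0.2, 0.99

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

discriminator = mlp(STATE_DIM + ACTION_DIM, 1)      # D(s, a) logit
critic = mlp(STATE_DIM + ACTION_DIM, N_QUANTILES)   # quantiles of the return Z(s, a)
policy = mlp(STATE_DIM, 2 * ACTION_DIM)             # Gaussian mean and log-std

# Fixed quantile midpoints tau_i = (2i - 1) / (2N).
taus = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES

def sample_action(state):
    """Reparameterized Gaussian policy (tanh squashing omitted for brevity)."""
    mean, log_std = policy(state).chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
    action = dist.rsample()
    log_prob = dist.log_prob(action).sum(-1, keepdim=True)
    return action, log_prob

def surrogate_reward(state, action):
    """GAIL-style reward r = -log(1 - D(s, a)), written via softplus on the logit."""
    logit = discriminator(torch.cat([state, action], dim=-1))
    return F.softplus(logit)

def critic_loss(state, action, next_state):
    """Quantile-Huber regression toward the entropy-regularized (soft) target."""
    with torch.no_grad():
        next_action, next_log_prob = sample_action(next_state)
        next_z = critic(torch.cat([next_state, next_action], dim=-1))     # [B, N]
        target = surrogate_reward(state, action) + \
                 GAMMA * (next_z - ALPHA * next_log_prob)                 # [B, N]
    z = critic(torch.cat([state, action], dim=-1))                        # [B, N]
    td = target.unsqueeze(1) - z.unsqueeze(2)                             # [B, N, N]
    huber = F.smooth_l1_loss(z.unsqueeze(2).expand_as(td),
                             target.unsqueeze(1).expand_as(td),
                             reduction="none")
    weight = (taus.view(1, -1, 1) - (td.detach() < 0).float()).abs()
    return (weight * huber).mean()

def policy_loss(state):
    """Soft policy improvement against the mean of the return distribution."""
    action, log_prob = sample_action(state)
    q = critic(torch.cat([state, action], dim=-1)).mean(dim=-1, keepdim=True)
    return (ALPHA * log_prob - q).mean()
```

Averaging over the learned quantiles (rather than bootstrapping a single point estimate) is the standard distributional-RL mechanism for damping Q-value overestimation, and the `ALPHA * log_prob` term is the maximum-entropy regularizer that drives exploration; both correspond to the two advantages listed in the abstract.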
