Abstract

The main challenge of the trajectory generation problem is to produce trajectories that are both long-term and diverse. Generative Adversarial Imitation Learning (GAIL) is a well-known model-free imitation learning algorithm that can be used to generate trajectory data, but vanilla GAIL fails to capture multi-modal demonstrations. Recent methods introduce latent variable models to address this; however, they can still suffer from mode missing. In this work, we propose a novel method, based on GAIL and a conditional Variational Autoencoder (cVAE), that generates long-term trajectories controllable by a continuous latent variable. We further assume that subsequences of the same trajectory should be encoded to nearby locations in the latent space, and therefore introduce a contrastive loss in the training of the encoder. For the motion synthesis task, we first construct a low-dimensional motion manifold with a VAE to reduce the burden on our imitation learning model. Our experimental results show that the proposed model outperforms state-of-the-art methods and can be applied to motion synthesis.
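The abstract does not give the exact form of the contrastive term, so the following is only a minimal sketch of one plausible realization: an InfoNCE-style loss that pulls together the encodings of two subsequences drawn from the same trajectory and pushes apart encodings of subsequences from different trajectories. The encoder architecture, temperature, and helper names here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (PyTorch) of a contrastive loss over subsequence encodings.
# Assumption: an InfoNCE-style objective; the paper's actual loss and encoder
# architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubsequenceEncoder(nn.Module):
    """Encodes a motion subsequence (T x feat_dim) into a latent vector (illustrative architecture)."""
    def __init__(self, feat_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):                      # x: (batch, T, feat_dim)
        _, h = self.gru(x)                     # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))         # (batch, latent_dim)

def contrastive_loss(z_a, z_b, temperature=0.1):
    """Subsequences of the same trajectory (matching rows of z_a and z_b) are
    positives; all other pairs in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))        # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage sketch: sample two subsequences per trajectory in the batch and add this
# term to the cVAE/GAIL training objective.
encoder = SubsequenceEncoder(feat_dim=32, latent_dim=16)
sub_a = torch.randn(8, 20, 32)                 # first subsequence of 8 trajectories
sub_b = torch.randn(8, 20, 32)                 # second subsequence of the same trajectories
loss = contrastive_loss(encoder(sub_a), encoder(sub_b))
```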
