Abstract

In this article, we present a flow-based framework for multi-modal trajectory prediction that provides accurate and explicit inference of the latent representations of trajectory data. Unlike other typical generative models (e.g., GANs and VAEs), flow-based models learn the data distribution explicitly through an invertible network, which converts a complicated distribution into a tractable form via invertible transformations. The whole framework is built upon the standard encoder–decoder architecture, where the LSTM is exploited as the fundamental block to capture the temporal structure of a trajectory. As a core module, we incorporate an invertible network that learns the multi-modal distribution of trajectory data and generates plausible future paths by sampling from a standard Gaussian distribution. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed approach and show its advantages over GAN-based and VAE-based prediction frameworks.
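To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of how an LSTM encoder–decoder can be combined with a conditional affine-coupling flow: the past trajectory is encoded into a context vector, a latent sample drawn from a standard Gaussian is pushed through invertible couplings conditioned on that context, and the result is decoded into multiple candidate future paths. All class names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Hedged sketch of a flow-based multi-modal trajectory predictor.
# Names (AffineCoupling, FlowTrajectoryPredictor) and sizes are hypothetical.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling step, conditioned on an encoder context."""
    def __init__(self, dim, ctx_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, z, ctx):            # z -> x (sampling direction)
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, ctx], dim=-1)).chunk(2, dim=-1)
        return torch.cat([z1, z2 * torch.exp(s) + t], dim=-1)

    def inverse(self, x, ctx):            # x -> z, with log|det| for an NLL-style loss
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        z2 = (x2 - t) * torch.exp(-s)
        return torch.cat([x1, z2], dim=-1), -s.sum(dim=-1)

class FlowTrajectoryPredictor(nn.Module):
    def __init__(self, latent_dim=16, ctx_dim=32, pred_len=12, n_flows=4):
        super().__init__()
        self.encoder = nn.LSTM(2, ctx_dim, batch_first=True)   # observed (x, y) steps
        self.flows = nn.ModuleList(
            [AffineCoupling(latent_dim, ctx_dim) for _ in range(n_flows)])
        self.decoder = nn.Linear(latent_dim + ctx_dim, pred_len * 2)
        self.latent_dim = latent_dim

    def forward(self, obs, n_samples=20):
        _, (h, _) = self.encoder(obs)                  # context from the observed past
        ctx = h[-1].repeat_interleave(n_samples, 0)
        z = torch.randn(ctx.size(0), self.latent_dim)  # sample a standard Gaussian
        for flow in self.flows:                        # invertible transforms: z -> latent
            z = flow(z, ctx)
        out = self.decoder(torch.cat([z, ctx], dim=-1))
        return out.view(obs.size(0), n_samples, -1, 2) # multi-modal future paths

# Usage: 8 observed steps per agent -> 20 sampled 12-step futures each.
model = FlowTrajectoryPredictor()
futures = model(torch.randn(4, 8, 2))   # shape: (batch=4, samples=20, pred_len=12, xy=2)
```

The `inverse` direction, which returns the log-determinant of the coupling, is what would be used to train the flow by maximum likelihood; sampling multiple latents per context is what yields the multi-modal predictions described above.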
