Abstract

Pedestrian trajectory prediction is a critical research area with applications in numerous domains, e.g., blind navigation, autonomous driving systems, and service robots. Two challenges exist in this field: modeling the spatio-temporal interactions among pedestrians and handling the uncertainty of pedestrian trajectories. To tackle these challenges, we propose a spatio-temporal interaction aware and trajectory distribution aware graph convolutional network. First, we propose a spatio-temporal interaction aware module that integrates a graph convolutional network and a self-attention mechanism to model spatio-temporal interactions among pedestrians. Second, we design a trajectory distribution aware module that learns latent trajectory distribution information from the measured trajectories at observed and future times, providing knowledge-rich distribution information for the multimodality of the predicted trajectories. Finally, to address the propagation and accumulation of prediction errors, we design a trajectory decoder that generates multimodal future trajectories. The proposed model is evaluated on videos recorded by a camera sensor in crowded areas and can be applied to predict the future trajectories of multiple pedestrians from in-vehicle cameras. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods on the average displacement error (ADE) and final displacement error (FDE) metrics and predicts socially acceptable future trajectories.
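
For reference, the sketch below shows how the ADE and FDE metrics mentioned above are commonly computed: ADE averages the Euclidean error over all predicted time steps, while FDE measures the error at the final step. This is a minimal illustration, not the paper's evaluation code; the array shapes and variable names are assumptions, and for multimodal predictors the paper's own protocol (e.g., scoring the best of K sampled trajectories) should be followed.

    import numpy as np

    def ade_fde(pred, gt):
        """Average and final displacement errors.

        pred, gt: arrays of shape (num_pedestrians, pred_len, 2)
        holding (x, y) positions over the prediction horizon.
        """
        # Per-step Euclidean distance between prediction and ground truth.
        dist = np.linalg.norm(pred - gt, axis=-1)  # (num_pedestrians, pred_len)
        ade = dist.mean()        # average over all steps and pedestrians
        fde = dist[:, -1].mean() # error at the final predicted step
        return ade, fde

    # Toy usage: 3 pedestrians, 12 predicted steps (illustrative values only).
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(3, 12, 2))
    pred = gt + 0.1 * rng.normal(size=gt.shape)
    print(ade_fde(pred, gt))
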
