Abstract

Pedestrian trajectory prediction is an essential but challenging task. Social interactions between pedestrians have an immense impact on trajectories, so better modeling of these interactions generally yields more accurate predictions. To comprehensively model the interactions between pedestrians, we propose a multilevel dynamic spatiotemporal digraph convolutional network (MDST-DGCN). It consists of three parts: a motion encoder to capture each pedestrian's specific motion features, a multilevel dynamic spatiotemporal directed graph encoder (MDST-DGEN) to capture social interaction features at multiple levels and adaptively fuse them, and a motion decoder to produce future trajectories. Experimental results on public datasets demonstrate that our model achieves state-of-the-art results in both long-term and short-term predictions for both high-density and low-density crowds.
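
To make the encoder-interaction-decoder pipeline concrete, the sketch below shows one way such a three-part model could be wired together. This is a minimal illustration assuming PyTorch; all module names, dimensions, and the single-level attention-style graph encoder are hypothetical stand-ins, not the authors' MDST-DGEN implementation, which builds multilevel dynamic directed graphs and fuses them adaptively.

```python
# Minimal sketch of an encoder -> directed-graph interaction -> decoder pipeline.
# Assumptions: PyTorch, relative (dx, dy) displacements as input, one interaction
# level instead of the paper's multilevel fusion. Illustrative only.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes each pedestrian's observed displacements into a motion feature."""
    def __init__(self, in_dim=2, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden_dim, batch_first=True)

    def forward(self, obs_traj):                  # obs_traj: (N, T_obs, 2)
        _, (h, _) = self.rnn(obs_traj)
        return h[-1]                              # (N, hidden_dim)

class GraphInteractionEncoder(nn.Module):
    """Single-level stand-in for a directed graph encoder: neighbor influence
    weights are asymmetric, so j's effect on i can differ from i's on j."""
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.edge_mlp = nn.Linear(2 * hidden_dim, 1)
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h):                         # h: (N, hidden_dim)
        n = h.size(0)
        recv = h.unsqueeze(1).expand(n, n, -1)    # receiver i along dim 0
        send = h.unsqueeze(0).expand(n, n, -1)    # sender j along dim 1
        w = torch.softmax(
            self.edge_mlp(torch.cat([recv, send], dim=-1)).squeeze(-1), dim=-1)
        agg = w @ h                               # (N, hidden_dim) aggregated neighbors
        return self.update(torch.cat([h, agg], dim=-1))

class MotionDecoder(nn.Module):
    """Rolls the fused feature forward into future displacements."""
    def __init__(self, hidden_dim=64, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.rnn = nn.LSTMCell(2, hidden_dim)
        self.out = nn.Linear(hidden_dim, 2)

    def forward(self, h, last_disp):              # h: (N, hidden), last_disp: (N, 2)
        c = torch.zeros_like(h)
        step, preds = last_disp, []
        for _ in range(self.pred_len):
            h, c = self.rnn(step, (h, c))
            step = self.out(h)
            preds.append(step)
        return torch.stack(preds, dim=1)          # (N, T_pred, 2)

# Usage: 5 pedestrians, 8 observed steps of (dx, dy) displacements.
obs = torch.randn(5, 8, 2)
feat = MotionEncoder()(obs)
fused = GraphInteractionEncoder()(feat)
future = MotionDecoder()(fused, obs[:, -1])
print(future.shape)                               # torch.Size([5, 12, 2])
```

The directed (asymmetric) edge weights are the key design choice this sketch tries to mirror: a pedestrian walking behind another is influenced differently than the one in front, which an undirected adjacency cannot express.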
