Abstract

Trajectory prediction is a crucial element of many automated tasks, such as autonomous navigation and video surveillance. To automatically predict the motion of an agent (e.g., a pedestrian or a car), a model needs to efficiently represent human motion and "understand" the external stimuli that may influence human behavior. In this work we propose a methodology to model the motion of agents in a video scene. Our method is based on space-varying sparse motion fields, which simultaneously characterize diverse motion patterns in the scene and implicitly learn contextual cues about the static environment, namely obstacles and semantic constraints. The sparse motion fields are applied to the task of long-term trajectory prediction using a probabilistic generative approach. Several benchmark data sets are used to demonstrate the potential of the proposed approach and show that our method achieves competitive, state-of-the-art performance.
