Abstract

Autonomous vehicle navigation in shared pedestrian environments requires the ability to predict future crowd motion and to understand human behaviour. However, most existing methods predict pedestrians' future motion without accounting for potential collisions within the crowd. Furthermore, most current predictive models are evaluated on datasets that assume full observability of the crowd via a top-down view, which does not reflect the real-world operating conditions of autonomous vehicles, whose on-board sensors suffer inherent limitations such as visual occlusion. Inspired by prior work, we propose a pedestrian motion prediction model trained via contrastive learning that improves prediction accuracy while forecasting collision-free trajectories. Additionally, we propose a method for deploying the predictor with a multi-pedestrian probabilistic tracker, which fuses multiple on-board sensors to track pedestrians in 3D space. Through comprehensive experiments on both aerial-view and driving datasets collected in a real-world urban environment, we show that our method outperforms state-of-the-art approaches, yielding better prediction accuracy and more socially acceptable predicted trajectories.
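The abstract does not spell out the contrastive objective, so the following is only an illustrative sketch of how a contrastive loss for collision-aware trajectory prediction is commonly formulated: an InfoNCE-style term that pulls the embedding of the predicted motion toward the ground-truth future (the positive) and pushes it away from embeddings of collision-prone locations near other pedestrians (the negatives). All function and variable names here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def social_contrastive_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    query:     (B, D)    embedding of the predicted trajectory
    positive:  (B, D)    embedding of the ground-truth future position
    negatives: (B, N, D) embeddings of sampled collision-prone locations
    Returns the mean negative log-likelihood of picking the positive.
    """
    # L2-normalize so similarities are cosine similarities
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, p, n = normalize(query), normalize(positive), normalize(negatives)

    pos_logit = np.sum(q * p, axis=-1, keepdims=True) / temperature   # (B, 1)
    neg_logits = np.einsum('bd,bnd->bn', q, n) / temperature          # (B, N)

    # Positive sample sits at index 0; cross-entropy against that index
    logits = np.concatenate([pos_logit, neg_logits], axis=1)          # (B, 1+N)
    logits -= logits.max(axis=1, keepdims=True)                       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

Minimizing this term encourages predicted embeddings to stay far, in embedding space, from regions occupied by other pedestrians, which is one way contrastive learning can bias a predictor toward collision-free forecasts.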
