Abstract

Predicting the state of dynamic objects in a real traffic environment is a key problem for autonomous vehicles. Various approaches have been proposed to learn object dynamics from visual observations with a static background. However, little research has addressed real traffic environments, owing to their complicated and changeable scenes. This paper proposes an adaptive multi-target future state prediction (position/velocity) method for autonomous driving conditions. In particular, an adaptive visual interaction method and a control mechanism are introduced to handle the changing number of objects across consecutive driving frames. In addition, a two-stream architecture with stage-wise learning is used to predict object states accurately by combining complementary spatial and temporal information. Experiments on two challenging public datasets, Udacity (CrowdAI) and Udacity (Autti), demonstrate the effectiveness of the proposed method for multi-target dynamic state prediction in real traffic environments.
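The adaptive handling of a changing object count can be illustrated with a minimal sketch in plain Python. All names here are hypothetical and the paper's actual mechanism is not specified in the abstract; this simply shows per-object state tracking that tolerates objects entering and leaving the scene, paired with a constant-velocity forecast of each object's next position and velocity:

```python
def update_tracks(tracks, detections, dt=1.0):
    """Update per-object states from the current frame's detections.

    tracks:     {obj_id: (position, velocity)} from the previous frame
    detections: {obj_id: position} observed in the current frame
    Objects absent from `detections` are dropped; new ones are initialized.
    """
    updated = {}
    for obj_id, pos in detections.items():
        if obj_id in tracks:
            prev_pos, _ = tracks[obj_id]
            vel = (pos - prev_pos) / dt   # finite-difference velocity estimate
        else:
            vel = 0.0                     # newly appeared object: no history yet
        updated[obj_id] = (pos, vel)
    return updated                         # vanished objects are not carried over


def predict_next(tracks, dt=1.0):
    """Constant-velocity forecast of each tracked object's next state."""
    return {i: (p + v * dt, v) for i, (p, v) in tracks.items()}
```

A learned model would replace the constant-velocity forecast, but the bookkeeping pattern (drop vanished IDs, initialize new ones) is what lets the predictor cope with a variable number of targets per frame.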
