Abstract

Single object tracking in point clouds is fundamental to enabling autonomous vehicles to understand dynamic traffic environments. Earlier trackers compute association confidence solely from the IoU between two static boxes, ignoring the motion properties of objects, which can weaken a tracker's association ability. To comprehensively associate an object with its estimated motion state, we introduce a directed representation that factorizes an object's box into its central position and orientation. To handle under-detection and over-detection, we also present an undirected range suppression mechanism that automatically enlarges and stabilizes the field of view at the current time step. The result is a single object tracking system with high accuracy and real-time performance. On both the KITTI and nuScenes tracking datasets, our system outperforms other recent single object trackers in both accuracy and speed. We also validate the superiority of our approach over multiple object tracking methods.
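To make the contrast concrete, the sketch below shows a plain axis-aligned IoU score (the static-box confidence the abstract critiques) next to a hypothetical motion-aware score built from a box factorized into its central position and orientation. The combination used here (a Gaussian on center distance times a cosine orientation-agreement term, with a `sigma` parameter) is an illustrative assumption, not the paper's actual formulation.

```python
import math

def iou_2d(box_a, box_b):
    """Axis-aligned 2D IoU; boxes are (x_min, y_min, x_max, y_max)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def directed_score(center_a, theta_a, center_b, theta_b, sigma=1.0):
    """Hypothetical score on a box factorized into center + orientation.

    Combines a Gaussian penalty on center distance with a cosine term
    rewarding orientation agreement (illustrative assumption only).
    """
    dist = math.hypot(center_b[0] - center_a[0], center_b[1] - center_a[1])
    pos_term = math.exp(-dist ** 2 / (2.0 * sigma ** 2))
    ori_term = 0.5 * (1.0 + math.cos(theta_b - theta_a))
    return pos_term * ori_term

# Two boxes that no longer overlap score 0 under IoU, but the
# factorized score still reflects how close and how aligned they are.
print(iou_2d((0, 0, 2, 2), (3, 0, 5, 2)))          # no overlap -> 0.0
print(directed_score((1, 1), 0.0, (4, 1), 0.0))     # nonzero, orientation-aware
```

Note how the IoU collapses to zero as soon as the boxes separate, while the factorized score degrades smoothly with distance and misalignment, which is the motivation for motion-aware association.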
