Abstract
3D multi-object tracking (MOT) is pivotal for associating the trajectories of objects in autonomous driving. The integration of image and point cloud data in 3D MOT algorithms has emerged as a research hotspot, aiming to achieve both high-precision and efficient tracking. In this paper, we present Smart3DMOT, a novel method that leverages multi-modal detection data and incorporates a cascade matching strategy, intelligently utilizing motion and multi-modal appearance affinity for data association across frames. Our algorithm introduces two association models, tailored to different tracking scenarios: the strong motion association model (SM-AM) and the multi-modal fusion feature association model (MMFF-AM). The SM-AM robustly associates objects by enhancing motion similarity, while the MMFF-AM integrates multi-modal fusion features from images and point clouds to generate appearance similarity between objects and combines them with motion cues for joint reasoning. Finally, we propose a cascade tracking strategy with a "spatial tracking state confirmer" (STSC), which guides the SM-AM and MMFF-AM through staged matching. This design allows the SM-AM or the MMFF-AM to handle instances of differing tracking difficulty, maintaining a trade-off between operational efficiency and tracking accuracy. Comprehensive ablation experiments demonstrate the effectiveness of the proposed modules, and results on the KITTI and nuScenes benchmarks show that our method attains state-of-the-art performance in 3D MOT.