Abstract
Camera-LiDAR fusion provides precise distance measurements and fine-grained textures, making it a promising option for 3D vehicle detection in autonomous driving scenarios. Previous camera-LiDAR based 3D vehicle detection approaches mainly employed image-based pre-trained models to extract semantic features. However, these methods can perform worse than LiDAR-based ones when semantic segmentation labels are unavailable in autonomous driving tasks. Motivated by this observation, we propose a novel semantic augmentation method, namely Sem-Aug, to guide high-confidence camera-LiDAR fusion feature generation and boost the performance of multimodal 3D vehicle detection. The key novelty of semantic augmentation lies in 2D segmentation mask auto-labeling, which provides supervision for the semantic segmentation sub-network and mitigates the poor generalization of camera-LiDAR fusion. Using semantic-augmentation-guided camera-LiDAR fusion features, Sem-Aug achieves remarkable performance on the representative KITTI autonomous driving dataset compared to both the LiDAR-based baseline and previous multimodal 3D vehicle detectors. Qualitative and quantitative experiments demonstrate that Sem-Aug provides significant improvements in the challenging "Hard" detection scenarios caused by occlusion and truncation.