Abstract

As camera and LiDAR sensors capture complementary information in autonomous driving, considerable effort has been devoted to semantic segmentation through multi-modality data fusion. However, fusion-based approaches require paired data, i.e., LiDAR point clouds and camera images with strict point-to-pixel mappings, as inputs in both the training and inference stages. This requirement seriously hinders their application in practical scenarios. In this work, we therefore propose 2D Priors Assisted Semantic Segmentation (2DPASS), a general training scheme that boosts representation learning on point clouds. 2DPASS fully exploits 2D images with rich appearance information during training, and then conducts semantic segmentation without the strict paired-data constraint. In practice, by leveraging an auxiliary modal fusion and multi-scale fusion-to-single knowledge distillation (MSFSKD), 2DPASS acquires richer semantic and structural information from the multi-modal data, which is then distilled into the pure 3D network. As a result, once equipped with 2DPASS, our baseline model shows significant improvement with only point cloud inputs. Specifically, it achieves state-of-the-art results on two large-scale benchmarks (SemanticKITTI and NuScenes), ranking first in both the single-scan and multi-scan competitions of SemanticKITTI.
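
To make the training scheme concrete, the following is a minimal, hypothetical PyTorch-style sketch of the idea summarized above: a 2D branch and a fusion head assist training only, and their knowledge is distilled into a pure 3D network that runs alone at inference. All module names, the loss weighting, and the single-scale distillation here are illustrative assumptions, not the authors' actual multi-scale MSFSKD implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoDPASSSketch(nn.Module):
    """Hypothetical sketch: 2D-assisted training, point-cloud-only inference."""

    def __init__(self, net3d: nn.Module, net2d: nn.Module, fusion_head: nn.Module,
                 num_classes: int, kd_weight: float = 1.0):
        super().__init__()
        self.net3d = net3d          # pure 3D (point cloud) backbone, kept for inference
        self.net2d = net2d          # 2D image backbone, training-time only
        self.fusion = fusion_head   # auxiliary modal fusion, training-time only
        self.cls3d = nn.LazyLinear(num_classes)
        self.cls_fused = nn.LazyLinear(num_classes)
        self.kd_weight = kd_weight

    def forward(self, points, images=None, labels=None):
        feat3d = self.net3d(points)              # per-point 3D features
        logits3d = self.cls3d(feat3d)
        if not self.training or images is None:
            return logits3d                      # inference: point cloud input only

        # Training: build the 2D-assisted fused branch and distill it into the 3D branch.
        feat2d = self.net2d(images)              # per-point features lifted from image pixels
        fused = self.fusion(torch.cat([feat3d, feat2d], dim=-1))
        logits_fused = self.cls_fused(fused)

        loss_3d = F.cross_entropy(logits3d, labels)
        loss_fused = F.cross_entropy(logits_fused, labels)
        # One-way distillation: the fused branch acts as a detached teacher,
        # so gradients from the KD term only update the pure 3D branch.
        loss_kd = F.kl_div(F.log_softmax(logits3d, dim=-1),
                           F.softmax(logits_fused.detach(), dim=-1),
                           reduction="batchmean")
        return loss_3d + loss_fused + self.kd_weight * loss_kd
```

At deployment, only `net3d` and `cls3d` are needed, so the inference network has the same cost as a plain point-cloud segmentation model.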
