Abstract

LiDAR and camera are two common vision sensors used in the real world, producing complementary point cloud and image data. While multimodal data has so far been exploited mainly for 3D detection and tracking, we study large-scale semantic segmentation through multimodal data fusion rather than knowledge transfer or distillation alone. We show that fusing LiDAR features with camera features, while abandoning the strict point-to-pixel hard correspondence, leads to better performance. Even so, it remains difficult to make full use of multimodal data due to the spatiotemporal misalignment of sensors and uneven data distribution. To address this issue, we propose Joint Semantic Segmentation (JoSS), a powerful LiDAR-camera fusion solution that employs the attention mechanism to explore the potential relationships between point clouds and images. Specifically, JoSS consists of commonly used 3D and 2D backbones together with lightweight transformer decoders operating on point clouds and images. The point cloud decoder uses queries to extract semantics from LiDAR features, and the image decoder adaptively fuses these queries with the corresponding image features. Both exploit contextual information, thus fully mining multimodal information for semantic segmentation. In addition, we propose an effective unimodal data augmentation (UDA) method that performs cross-modal contrastive learning on point clouds and images; it significantly improves accuracy by augmenting the point cloud alone, without the complexity of generating paired samples of both modalities. Our JoSS achieves advanced results on two widely used large-scale benchmarks, i.e., SemanticKITTI and nuScenes-lidarseg.
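
The following is a minimal sketch of the query-based fusion idea described above: learnable queries first gather semantics from LiDAR features and are then adaptively fused with image features via cross-attention instead of a hard point-to-pixel projection. Module names, dimensions, and the specific attention layout are assumptions for illustration, not the authors' actual implementation.

import torch
import torch.nn as nn

class CrossModalFusionDecoder(nn.Module):
    """Hypothetical sketch of a query-based LiDAR-camera fusion decoder.

    Learnable queries attend first to point cloud features (point cloud
    decoder), then to image features (image decoder), so fusion relies on
    attention weights rather than strict point-to-pixel correspondence.
    """

    def __init__(self, num_queries=100, dim=256, num_heads=8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)  # learnable semantic queries
        self.point_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, point_feats, image_feats):
        # point_feats: (B, N_points, dim) from a 3D backbone
        # image_feats: (B, H*W, dim) from a 2D backbone
        B = point_feats.size(0)
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)

        # Point cloud decoder: queries gather semantics from LiDAR features.
        q = self.norm1(q + self.point_attn(q, point_feats, point_feats)[0])

        # Image decoder: the same queries are adaptively fused with image
        # features through cross-attention.
        q = self.norm2(q + self.image_attn(q, image_feats, image_feats)[0])
        return q  # fused queries, e.g. fed to a segmentation head

# Toy usage with random tensors standing in for backbone outputs.
if __name__ == "__main__":
    decoder = CrossModalFusionDecoder()
    pts = torch.randn(2, 4096, 256)
    img = torch.randn(2, 60 * 80, 256)
    print(decoder(pts, img).shape)  # torch.Size([2, 100, 256])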
