Abstract

Significant advances have been made in designing CNNs for RGB semantic segmentation. However, these CNNs are not widely adopted for RGB-D segmentation because of the asymmetry between the RGB and depth modalities. Instead, dedicated architectures are designed to fuse the two modalities, often employing complex structures that substantially increase computational cost. In this paper, we propose a novel way to learn the fusion of RGB and depth information at an early stage. This enables our method to adopt existing RGB segmentation networks with minimal modification. Our simple yet effective method builds a bridge between RGB and RGB-D semantic segmentation, avoiding the need to design a far more complex network structure for RGB-D segmentation. The proposed method treats RGB and depth information in an inherently asymmetric manner and, to the best of our knowledge, is the first approach that learns to fuse them multiplicatively for RGB-D segmentation; we therefore call it RGB×D. Extensive experiments and ablation studies on the challenging NYUDv2, SUN RGB-D, and Cityscapes semantic segmentation benchmarks show that the proposed RGB×D offers a consistent improvement over several baselines.
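To make the idea of asymmetric, multiplicative early fusion concrete, the sketch below shows one plausible form: depth is mapped to a per-channel gate that multiplicatively modulates an early RGB feature map. This is only an illustrative assumption based on the abstract; the function `multiplicative_fusion`, the sigmoid gating, and the parameters `w` and `b` are hypothetical and do not come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplicative_fusion(rgb_feat, depth, w, b):
    """Hypothetical RGB x D fusion: depth gates RGB features multiplicatively.

    rgb_feat: (H, W, C) early-stage RGB feature map
    depth:    (H, W) depth map
    w, b:     (C,) learned per-channel scale and bias for the depth gate
    Returns a fused feature map of shape (H, W, C).
    """
    # Map depth to a per-channel multiplicative gate in (0, 1) via a sigmoid.
    gate = 1.0 / (1.0 + np.exp(-(depth[..., None] * w + b)))  # (H, W, C)
    # Asymmetric fusion: depth modulates RGB features rather than being
    # concatenated as extra input channels, so the RGB backbone is unchanged.
    return rgb_feat * gate

H, W, C = 8, 8, 16
rgb_feat = rng.standard_normal((H, W, C))
depth = rng.random((H, W))
w = rng.standard_normal(C)
b = np.zeros(C)

fused = multiplicative_fusion(rgb_feat, depth, w, b)
print(fused.shape)  # (8, 8, 16)
```

Because the fused map keeps the RGB feature shape, the output can be fed directly into the remainder of an unmodified RGB segmentation network, which is the property the abstract emphasizes.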
