Leveraging auxiliary modalities, such as depth or point cloud information, to improve RGB semantic segmentation has shown significant potential. However, existing methods mainly rely on convolutional modules to aggregate features from auxiliary modalities and therefore fail to fully exploit long-range dependencies. Moreover, their fusion strategies are typically limited to a single scheme. In this paper, we propose a transformer-based multimodal fusion framework that better utilizes auxiliary modalities to enhance semantic segmentation. Specifically, we employ a dual-stream architecture to extract features from the RGB and auxiliary modalities, and we incorporate both early fusion and deep feature fusion. At each layer, a mixed attention mechanism uses features from the other modality to guide and enhance the current modality's features before they are propagated to the next stage of feature extraction. After feature extraction, we employ an enhanced cross-attention mechanism for feature interaction, followed by channel fusion to obtain the final semantic features. We then supervise the RGB, auxiliary, and fusion streams separately to facilitate representation learning for each modality. Experimental results demonstrate that our framework performs well across diverse modalities, achieving state-of-the-art results on the NYU Depth V2, SUN-RGBD, DELIVER, and MFNet datasets.
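To illustrate the fusion stage described above, the following is a minimal PyTorch sketch of cross-attention between the RGB and auxiliary streams followed by channel fusion. It is a simplified assumption of the general idea, not the paper's actual implementation: the class name `CrossAttentionFusion`, the use of `nn.MultiheadAttention`, and all hyperparameters are illustrative, and the per-layer mixed attention and multi-stream supervision are omitted.

```python
# Hypothetical sketch of cross-attention fusion between two modality streams.
# Not the authors' code; names and dimensions are illustrative only.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Each stream attends to the other stream's features.
        self.rgb_attends_aux = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.aux_attends_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Channel fusion: concatenate the two enhanced streams and project back.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, rgb: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # rgb, aux: (batch, tokens, dim) token sequences from the two encoders.
        rgb_enh, _ = self.rgb_attends_aux(query=rgb, key=aux, value=aux)
        aux_enh, _ = self.aux_attends_rgb(query=aux, key=rgb, value=rgb)
        # Residual connections preserve each stream's own information.
        rgb_enh = rgb + rgb_enh
        aux_enh = aux + aux_enh
        # Concatenate along channels and project to the fused semantic features.
        return self.fuse(torch.cat([rgb_enh, aux_enh], dim=-1))


if __name__ == "__main__":
    rgb = torch.randn(2, 196, 256)   # e.g. 14x14 tokens, 256 channels
    aux = torch.randn(2, 196, 256)   # depth / point-cloud-derived tokens
    fused = CrossAttentionFusion(dim=256)(rgb, aux)
    print(fused.shape)  # torch.Size([2, 196, 256])
```

In this sketch, the fused token sequence would feed a segmentation head; the paper's "enhanced" cross-attention presumably adds further refinements beyond this plain formulation.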