Point cloud segmentation is essential for scene understanding and provides high-level information for many applications, such as autonomous driving, robotics, and virtual reality. To improve the accuracy and robustness of point cloud segmentation, many researchers have attempted to fuse camera images to complement color and texture information. The common fusion strategy combines convolutional operations with concatenation, element-wise addition, or element-wise multiplication. However, conventional convolutional operators confine the fusion of modal features to their receptive fields, which can leave the fusion incomplete and limited. In addition, encoder–decoder segmentation networks cannot explicitly perceive segmentation boundary information, which leads to semantic ambiguity and classification errors at object edges. These errors are further amplified in point cloud segmentation tasks and significantly affect segmentation accuracy. To address these issues, we propose a novel self-attention multi-modal fusion network for point cloud semantic segmentation. Firstly, to effectively fuse features from different modalities, we propose a self-cross fusion module (SCF), which models long-range modality dependencies and transfers complementary image information to the point cloud to fully leverage the advantages of each modality. Secondly, we design a salience refinement module (SR), which computes the importance of channels in the feature maps and global descriptors to enhance the representation of salient modal features. Finally, we propose a local-aware anisotropy loss that measures the element-level importance in the data and explicitly provides boundary information to the model, alleviating the inherent semantic ambiguity of segmentation networks. Extensive experiments on two benchmark datasets demonstrate that our method surpasses current state-of-the-art methods.
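For illustration only, the sketch below shows one plausible way a self/cross-attention fusion block in the spirit of the SCF module could be organized in PyTorch: point features first attend to themselves to capture long-range context, then attend to image features to pull in complementary color and texture cues. The class name, dimensions, and layer layout are assumptions and not the paper's actual architecture.

```python
# Hypothetical sketch of a self/cross-attention fusion block inspired by the
# SCF module described in the abstract. All names, dimensions, and the exact
# attention layout are assumptions; the paper's real design may differ.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuses point-cloud features with image features via attention, so each
    point can aggregate complementary image cues beyond a fixed convolutional
    receptive field."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Points attend to themselves to model long-range intra-modal context.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Points attend to image tokens (queries from points, keys/values from image).
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim))

    def forward(self, point_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N_points, dim); image_feats: (B, N_pixels, dim)
        x = point_feats
        x = self.norm1(x + self.self_attn(x, x, x, need_weights=False)[0])
        # Transfer complementary image information into the point branch.
        x = x + self.cross_attn(x, image_feats, image_feats, need_weights=False)[0]
        return self.norm2(x + self.ffn(x))


if __name__ == "__main__":
    fusion = CrossModalFusion(dim=128, num_heads=4)
    pts = torch.randn(2, 1024, 128)   # toy point-cloud features
    img = torch.randn(2, 4096, 128)   # toy flattened image features
    print(fusion(pts, img).shape)     # torch.Size([2, 1024, 128])
```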