Abstract

The fusion of RGB and thermal images has profound implications for the semantic segmentation of challenging urban scenes, such as those with poor illumination. Nevertheless, existing RGB-T fusion networks pay little attention to modality differences; i.e., RGB and thermal images are commonly fused with fixed weights. In addition, spatial context details are lost during conventional feature extraction, inevitably leading to imprecise object segmentation. To improve segmentation accuracy, a novel network named spatial feature aggregation and fusion with modality adaptation (SFAF-MA) is proposed in this paper. The modality difference adaptive fusion (MDAF) module adaptively fuses the RGB and thermal features with weights generated by an attention mechanism. In addition, the spatial semantic fusion (SSF) module is designed to exploit richer spatial information by capturing multiscale receptive fields with dilated convolutions of different rates and by aggregating shallow-level features, which carry rich visual detail, with deep-level features, which carry strong semantics. Compared with existing methods on the public MFNet and PST900 datasets, the proposed network significantly improves segmentation accuracy. The code is available at https://github.com/hexunjie/SFAF-MA.
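
To make the two modules concrete, below is a minimal PyTorch-style sketch of the ideas described above: attention-generated per-modality weights for RGB-thermal fusion (in the spirit of MDAF) and dilated convolutions of different rates combined with an upsampled deeper feature map (in the spirit of SSF). The class names, channel layout, and exact attention form are illustrative assumptions, not the released SFAF-MA implementation; see the repository linked above for the authors' code.

```python
# Minimal sketch only; NOT the authors' implementation. Module names, channel
# sizes, and the attention form are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAdaptiveFusion(nn.Module):
    """Fuse RGB and thermal features with learned, input-dependent weights
    instead of fixed coefficients (MDAF-like idea)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-attention gate over the concatenated modalities (assumed form).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([rgb, thermal], dim=1)        # (B, 2C, H, W)
        weights = torch.sigmoid(self.gate(stacked))       # (B, 2C, 1, 1)
        w_rgb, w_th = weights.chunk(2, dim=1)             # per-modality weights
        return w_rgb * rgb + w_th * thermal               # adaptive fusion


class SpatialSemanticFusion(nn.Module):
    """Capture multiscale context with dilated convolutions and merge a
    shallow (detail-rich) map with a deep (semantic) map (SSF-like idea)."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.merge = nn.Conv2d(channels * (len(rates) + 1), channels, kernel_size=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the lower-resolution deep feature to the shallow map's size.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                                align_corners=False)
        multiscale = [branch(shallow) for branch in self.branches]
        return self.merge(torch.cat(multiscale + [deep_up], dim=1))


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 60, 80)
    thermal = torch.randn(1, 64, 60, 80)
    deep = torch.randn(1, 64, 30, 40)
    fused = ModalityAdaptiveFusion(64)(rgb, thermal)
    out = SpatialSemanticFusion(64)(fused, deep)
    print(out.shape)  # torch.Size([1, 64, 60, 80])
```

In this sketch, the sigmoid gate produces one weight per channel for each modality, so poorly lit regions can lean on the thermal stream rather than summing the two streams with fixed coefficients; the dilated branches enlarge the receptive field without losing spatial resolution before the deep semantic map is merged in.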
