Abstract

Semantic segmentation methods can achieve satisfactory performance under poor lighting conditions by exploiting the complementary cues in RGB and thermal images. However, most methods employ straightforward fusion strategies, which may explore complementary information insufficiently and ignore the cross-level propagation of all but spatial information. Further, high-level contextual information may be inadequately enhanced owing to the use of simple perceptive modules. To address these limitations, we introduce a grid-like context-aware network (GCNet) for the semantic segmentation of RGB-thermal images. A hybrid fusion module integrates complementary information across modalities while propagating fusion cues across levels by incorporating previously fused features. Given the importance of contextual cues in semantic segmentation, a grid-like context-aware module is designed to capture rich contextual information. A three-branch discriminator evaluates the generated prediction maps and improves the quality of the parsing results. Experiments on two RGB-thermal datasets show that the proposed network achieves state-of-the-art performance.
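
As a rough illustration of the hybrid fusion idea described above, the sketch below shows one way level-wise RGB-thermal fusion could incorporate previously fused features. The module name, gating scheme, and channel sizes are our assumptions for illustration; the abstract does not specify the actual implementation.

```python
# Minimal sketch of cross-level RGB-thermal fusion (assumed design, not the
# paper's published code): a gate conditioned on both modalities and the
# previous level's fused features decides the per-pixel modality mix.
import torch
import torch.nn as nn


class HybridFusion(nn.Module):
    """Fuses RGB and thermal features at one level, conditioned on the
    fused features propagated from the previous (coarser) level."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 3, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, rgb, thermal, prev_fused):
        # Upsample the previous fusion result to the current resolution so
        # cross-level cues can steer the current fusion.
        prev = nn.functional.interpolate(
            prev_fused, size=rgb.shape[2:], mode="bilinear", align_corners=False
        )
        # Per-pixel gate over the two modalities, informed by all three inputs.
        g = self.gate(torch.cat([rgb, thermal, prev], dim=1))
        fused = g * rgb + (1 - g) * thermal
        # Refine the fused features together with the propagated ones.
        return self.merge(torch.cat([fused, prev], dim=1))


# Usage with illustrative tensor sizes:
fusion = HybridFusion(64)
rgb = torch.randn(1, 64, 32, 32)
thermal = torch.randn(1, 64, 32, 32)
prev = torch.randn(1, 64, 16, 16)  # fused features from the coarser level
out = fusion(rgb, thermal, prev)   # -> (1, 64, 32, 32)
```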
