Abstract
Infrared and visible image fusion (IVIF) must reconcile the complementary information captured by the two modalities, so a central research challenge is extracting this modality-specific information effectively. In this work, we propose a multi-stage self-supervised cross-modality contrastive representation network for infrared and visible image fusion (MSL-CCRN). First, since scene differences between modalities hinder cross-modal fusion, we propose a contrastive representation network (CRN). CRN strengthens the interaction between the fused image and the source images and significantly improves the similarity between the fused image and the meaningful features of each modality. Second, because IVIF lacks ground truth, the quality of a directly obtained fused image suffers; we design a multi-stage fusion strategy to counter the resulting loss of important information. Notably, our method is fully self-supervised. In fusion stage I, we reconstruct an initial fused image that serves as a new view for fusion stage II. In fusion stage II, we use this stage-I fused image to perform three-view contrastive representation, which constrains feature extraction from the source images so that the final fused image retains more of their important information. Extensive qualitative and quantitative experiments, together with downstream object detection experiments, show that the proposed method outperforms most state-of-the-art methods.
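To make the three-view contrastive representation idea concrete, below is a minimal PyTorch sketch of an InfoNCE-style loss that pulls the stage-II fused representation toward the infrared, visible, and stage-I fused views. The function names, temperature, and equal weighting of the three terms are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    Matching (anchor_i, positive_i) pairs are treated as positives;
    all other pairings within the batch act as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)       # (B, D)
    positive = F.normalize(positive, dim=-1)   # (B, D)
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

def three_view_contrastive_loss(f_fused, f_ir, f_vis, f_prev, temperature=0.1):
    """Hypothetical stage-II objective: align the fused-image features with
    the infrared view, the visible view, and the stage-I fused view.
    Equal averaging of the three terms is an assumption."""
    return (info_nce(f_fused, f_ir, temperature)
            + info_nce(f_fused, f_vis, temperature)
            + info_nce(f_fused, f_prev, temperature)) / 3.0
```

In this reading, the stage-I fused image acts as a third positive view alongside the two source modalities, so the contrastive term constrains stage-II feature extraction rather than relying on a ground-truth fused image.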