Abstract

Infrared and visible image fusion aims to generate a desired fused image by combining complementary images from different sensors. The fused images are better suited to human visual perception and to downstream image-processing tasks. Although a variety of infrared and visible image fusion methods have been proposed in recent years, the degradation of intermediate features and the loss of detail within the network remain difficult to solve, leading to lost details and artifacts in the fused images. In this paper, a symmetrical skip attention network is constructed to address these problems. The skip attention mechanism in our network compensates for information lost during the feature extraction stage, which effectively reduces the loss of detail in the fused images. Meanwhile, we design a weight block that calculates the information weight used in the loss function, so the network can retain source-image information adaptively. A U-Net with self-attention performs the feature extraction in the weight block; the self-attention helps the network extract more detailed features. We also conducted ablation experiments to verify the contribution of each module in the network. Extensive experimental results show that the proposed RDCa-Net is superior to the latest fusion methods in both subjective and objective evaluation. In addition, we apply the fused images generated by our method to object detection; compared with other fusion algorithms, our method achieves higher confidence for both infrared and visible targets, demonstrating its potential to support advanced vision tasks.
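The core idea of an attention-gated skip connection — re-weighting encoder (skip) features before merging them back into the decoder path so that details lost downstream can be compensated — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's exact formulation; the function names, the channel-wise pooling, and the sigmoid gating are all assumptions introduced for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skip_attention(encoder_feat, decoder_feat):
    """Gate the encoder (skip) features with a channel-attention map
    derived from the decoder features, then merge the two paths.

    encoder_feat, decoder_feat: arrays of shape (C, H, W).
    Returns the merged feature map of shape (C, H, W).
    """
    # Channel descriptor: global average pool over spatial dims -> (C,)
    channel_desc = decoder_feat.mean(axis=(1, 2))
    # Per-channel attention weights in (0, 1), broadcastable to (C, H, W)
    attn = sigmoid(channel_desc)[:, None, None]
    # The re-weighted skip connection reinjects encoder detail
    # that would otherwise be degraded along the decoder path.
    return decoder_feat + attn * encoder_feat

# Toy usage with random feature maps
enc = np.random.rand(8, 16, 16)
dec = np.random.rand(8, 16, 16)
merged = skip_attention(enc, dec)
```

In a real network the attention map would be learned (e.g., via small convolutional or fully connected layers) rather than taken directly from a pooled descriptor; the sketch only shows the data flow of gating a skip connection.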
