Abstract

To make full use of the complementary information in infrared and visible images, this paper proposes an infrared and visible image fusion algorithm based on a supervised convolutional neural network (CNN). The network consists of a coding layer, a fusion layer, a decoding layer, and an output layer. The coding layer contains two convolution layers and two dense modules, which extract features from the input images. In the fusion layer, an improved brightness-weighted algorithm fuses the extracted feature maps. The decoding layer consists of three convolution layers. During training, fusion results from the existing literature serve as training labels, and an improved loss function combining squared loss and structural similarity is used. The decoded infrared image, the decoded visible image, and the decoded feature image are trained jointly, and the final fused image is obtained as a weighted combination of these three images. Experimental results show that the proposed method better preserves both the clear targets and the detailed information of the infrared and visible images, and experiments also demonstrate that it outperforms state-of-the-art fusion methods on objective metrics.
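The abstract outlines an encoder-fusion-decoder pipeline; the sketch below illustrates one plausible reading of that structure in PyTorch. The layer widths, kernel sizes, dense-block depth, and the exact form of the brightness weighting are assumptions for illustration, not the authors' reported configuration, and the loss terms and joint training of the three decoded images are omitted.

```python
# Hypothetical sketch of the coding layer (two convolutions + two dense
# modules), brightness-weighted fusion layer, and three-convolution decoder
# described in the abstract. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Dense module: each conv sees the concatenation of all earlier outputs."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(layers)]
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)


class Encoder(nn.Module):
    """Coding layer: two plain convolutions followed by two dense modules."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 16, 3, padding=1)
        self.dense1 = DenseBlock(16)           # 16 -> 64 channels
        self.dense2 = DenseBlock(64)           # 64 -> 112 channels

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.dense2(self.dense1(x))


class Decoder(nn.Module):
    """Decoding layer: three convolutions mapping features back to an image."""
    def __init__(self, in_ch=112):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def brightness_weighted_fusion(feat_ir, feat_vis, ir, vis, eps=1e-6):
    """Fusion layer: weight the two feature maps by the relative mean
    brightness of the source images (one possible weighting rule)."""
    w_ir = ir.mean(dim=(2, 3), keepdim=True)
    w_vis = vis.mean(dim=(2, 3), keepdim=True)
    total = w_ir + w_vis + eps
    return (w_ir / total) * feat_ir + (w_vis / total) * feat_vis


if __name__ == "__main__":
    encoder, decoder = Encoder(), Decoder()
    ir = torch.rand(1, 1, 128, 128)   # infrared input
    vis = torch.rand(1, 1, 128, 128)  # visible input
    fused_feat = brightness_weighted_fusion(encoder(ir), encoder(vis), ir, vis)
    print(decoder(fused_feat).shape)  # -> torch.Size([1, 1, 128, 128])
```

In the paper's training scheme, the decoder would additionally reconstruct the infrared and visible inputs, and the squared-loss and structural-similarity terms would be computed against the label fusion results; those pieces are left out here for brevity.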
