Abstract
The fusion of infrared and visible images can exploit the indication characteristics of infrared imagery and the textural details of visible imagery to realize all-weather detection. Deep learning (DL) based fusion solutions can reduce computational cost and complexity compared with traditional methods, since there is no need to design complex feature extraction methods and fusion rules. However, there are no standard reference images, and publicly available infrared and visible image pairs are scarce. Most supervised DL-based solutions must therefore be pre-trained on other large labeled datasets and may not perform well at test time. The few unsupervised fusion methods can hardly obtain ideal images with a good visual impression. In this paper, an infrared and visible image fusion method based on an unsupervised convolutional neural network is proposed. In the network structure, a densely connected convolutional network (DenseNet) is used as the sub-network for feature extraction and reconstruction, so that more information from the source images is retained in the fused images. As to the loss function, a perceptual loss is introduced and combined with the structural similarity (SSIM) loss to constrain the updating of the weight parameters during back propagation. The designed perceptual loss effectively improves the visual information fidelity (VIF) of the fused image. Experimental results show that this method can obtain fused images with prominent targets and clear details. Compared with seven other traditional and deep learning methods, the fusion results of this method are better overall in both objective evaluation and visual observation.
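The abstract states that a perceptual loss is combined with the SSIM loss but does not give the exact formulation. A minimal sketch of the idea follows, assuming a frozen VGG-16 feature extractor (layers up to relu3_3), a global SSIM term against each source image, and a trade-off weight `lam`; these are illustrative choices, not values confirmed by the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualSSIMLoss(nn.Module):
    """Sketch of an SSIM + perceptual loss for unsupervised fusion.

    Assumptions (not from the paper): VGG-16 features up to relu3_3
    as the perceptual space, and a fixed trade-off weight `lam`.
    """
    def __init__(self, lam=0.1):
        super().__init__()
        # Frozen VGG-16 feature extractor; ImageNet input normalization
        # is omitted here to keep the sketch short.
        self.vgg = vgg16(pretrained=True).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.lam = lam

    def ssim(self, x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Global (whole-image) SSIM per channel; windowed SSIM is more
        # common in practice but longer to write out.
        mx, my = x.mean([2, 3]), y.mean([2, 3])
        vx, vy = x.var([2, 3]), y.var([2, 3])
        cov = ((x - mx[..., None, None]) *
               (y - my[..., None, None])).mean([2, 3])
        return ((2 * mx * my + c1) * (2 * cov + c2) /
                ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))).mean()

    def forward(self, fused, ir, vis):
        # Structural similarity between the fused image and each source.
        l_ssim = 2.0 - self.ssim(fused, ir) - self.ssim(fused, vis)
        # Perceptual distance in VGG feature space; grayscale inputs are
        # repeated to 3 channels to match VGG's expected input.
        f3 = fused.repeat(1, 3, 1, 1)
        l_perc = (F.mse_loss(self.vgg(f3), self.vgg(ir.repeat(1, 3, 1, 1))) +
                  F.mse_loss(self.vgg(f3), self.vgg(vis.repeat(1, 3, 1, 1))))
        return l_ssim + self.lam * l_perc
```

During training, this loss would be evaluated on the network's fused output against both source images, so the gradient pulls the fusion toward structural and perceptual agreement with each input.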
Highlights
Image fusion technology realizes the synthesis of information from multi-source images and involves sensor imaging, image preprocessing, image transformation, computer vision, artificial intelligence, and other research fields.
An infrared and visible image fusion method based on an unsupervised convolutional neural network (CNN) with perceptual loss is proposed; the main contributions of our research are as follows:
Yan [29] proposed an unsupervised deep multi-focus image fusion (MFIF) method trained on cropped source image pairs without any pre-training. This end-to-end model had a stronger ability to extract features, which is important for image reconstruction.
Summary
Image fusion technology realizes the synthesis of information from multi-source images and involves sensor imaging, image preprocessing, image transformation, computer vision, artificial intelligence, and other research fields. An infrared and visible image fusion method based on an unsupervised convolutional neural network (CNN) with perceptual loss is proposed. Yan [29] proposed an unsupervised deep MFIF method trained on cropped source image pairs without any pre-training; this end-to-end model had a stronger ability to extract features, which is important for image reconstruction. However, it cannot extract and preserve the unique information of source images obtained by different imaging sensors. Another issue is that pre-training on other large datasets is generally necessary to learn the parameters of the encoder and decoder. In the method section, the design details of the proposed infrared and visible image fusion method based on an unsupervised convolutional neural network are introduced: Conv represents the convolution operation, Net represents an input or output layer of the network, and Concat represents channel concatenation.
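The summary names only the building blocks (Conv, Concat, and a DenseNet sub-network). A minimal sketch of a densely connected feature-extraction sub-network with channel concatenation is shown below; the depth, growth rate, and channel widths are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Sketch of a DenseNet-style sub-network for feature extraction.

    Each layer's output is concatenated (Concat) with all earlier
    feature maps before being passed on, so later layers see every
    preceding feature map. Depth and widths here are illustrative.
    """
    def __init__(self, in_ch=1, growth=16, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),  # Conv
                nn.ReLU(inplace=True),
            ))
            ch += growth  # channels grow by `growth` after each Concat

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # Concat: dense connectivity over all previous feature maps.
            out = layer(torch.cat(feats, dim=1))
            feats.append(out)
        return torch.cat(feats, dim=1)

# Usage: extract densely connected features from a grayscale image.
block = DenseBlock()
features = block(torch.randn(1, 1, 64, 64))  # -> (1, 1 + 4*16, 64, 64)
```

Dense connectivity of this kind is what allows shallow detail features and deeper semantic features to reach the reconstruction stage together, which is consistent with the paper's stated goal of retaining more source-image information in the fused result.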