Image fusion combines multiple images of the same scene into a single, more informative image; infrared and visible image fusion is an important branch of this field. To address the problems of diminished luminance of the infrared target, inconspicuous target features, and blurred texture in the fused result, this paper introduces a novel and effective fusion framework that merges multi-scale convolutional neural networks (CNNs) with saliency weight maps. First, the method measures the source image features to estimate an initial saliency weight map. Then, the initial weight map is segmented and optimized with a guided filter before being further processed by the CNN. Next, a trained Siamese convolutional network solves the two key problems of activity-level measurement and weight assignment. Meanwhile, a multi-layer fusion strategy is designed to retain the luminance of the infrared target and the texture information of the visible background. Finally, the fusion coefficients are adjusted adaptively using saliency. Experimental results show that the method outperforms state-of-the-art algorithms in both subjective visual quality and objective evaluation metrics.
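The core pipeline in the abstract — estimate a saliency weight map from each source image, refine it with a guided filter, then fuse by pixel-wise weighting — can be sketched as follows. This is an illustrative simplification, not the paper's method: the saliency measure (distance from the global mean intensity), the choice of the infrared image as the filter guide, and the parameters `radius` and `eps` are all assumptions, and the CNN-based activity measurement and multi-layer strategy are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    # He et al.'s guided filter: edge-preserving smoothing of `src`
    # steered by `guide`; all window means are box (uniform) filters.
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(ir, vis, radius=4, eps=1e-3):
    # Toy saliency: per-pixel distance from the global mean intensity
    # (a stand-in for the paper's feature-based saliency estimate).
    s_ir = np.abs(ir - ir.mean())
    s_vis = np.abs(vis - vis.mean())
    # Binary initial weight map: 1 where the infrared pixel is more salient.
    w = (s_ir >= s_vis).astype(float)
    # Refine the hard weight map with the guided filter, then clip to [0, 1].
    w = np.clip(guided_filter(ir, w, radius, eps), 0.0, 1.0)
    # Pixel-wise convex combination of the two source images.
    return w * ir + (1.0 - w) * vis
```

Because the refined weights are clipped to [0, 1], each fused pixel is a convex combination of the corresponding infrared and visible pixels, so the result stays within the per-pixel range of the inputs.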