Abstract

Existing studies on infrared and visible image fusion generally first decompose the source images and then extract fusion-relevant features from the decomposition results to obtain better fusion quality. However, they usually decompose each single-modality image independently through various techniques such as latent low-rank representation (LatLRR), without considering the spatial consistency between the infrared and visible modalities, and may therefore fail to effectively capture inherent image features. In this paper, we propose a sparse consistency constrained latent low-rank representation (SccLatLRR) method to fuse infrared and visible images. First, the infrared and visible images are decomposed simultaneously by low-rank representation, treated as the inputs of different tasks. During the decomposition, the L2,1 norm is used to constrain the rank so as to maintain sparse consistency, and the low-rank consensus representations of the infrared and visible images are obtained simultaneously. Second, the base information is further mined using a VGG network and the non-subsampled contourlet transform (NSCT) to extract more effective fusion features. Finally, different fusion strategies are applied to the base part and the salient part. An effective iterative algorithm is proposed to optimize the model. Experimental results on the public TNO dataset demonstrate the effectiveness of our method compared with several state-of-the-art methods.
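
The abstract does not spell out the optimization steps, but the two ingredients it names (an L2,1-norm sparse-consistency constraint and a low-rank term) are typically handled with closed-form proximal operators inside an iterative solver. The Python sketch below is a minimal, hypothetical illustration of those two operators, not the authors' released code: column-wise soft thresholding for the L2,1 term and singular value thresholding for the low-rank term. All function names are illustrative.

```python
# Minimal sketch (assumed, not from the paper) of the proximal operators
# commonly used in L2,1-constrained low-rank decompositions.
import numpy as np

def l21_norm(X):
    """L2,1 norm: sum of the Euclidean norms of the columns of X."""
    return np.sum(np.linalg.norm(X, axis=0))

def prox_l21(X, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each column toward zero.

    Columns with norm below tau are zeroed, which is what enforces
    column-wise (sparse-consistency) structure across modalities.
    """
    col_norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(col_norms, 1e-12))
    return X * scale

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

# Toy usage on a random matrix standing in for stacked image patches.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 256))
print(l21_norm(X), l21_norm(prox_l21(X, 0.5)))
print(np.linalg.matrix_rank(svt(X, 5.0)))
```

In a solver such as ADMM or inexact ALM, these operators would be applied alternately to the low-rank and sparse blocks of each modality's representation, with the shared consensus variable tying the infrared and visible tasks together.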
