Image fusion combines source images acquired by different sensors under different illumination conditions into a single complementary, comprehensive image. In this study, a novel multi-scale image fusion method combining the non-subsampled contourlet transform (NSCT) with the rolling guidance filter (RGF) is proposed to preserve edges and texture details better than conventional methods. First, the infrared (IR) and visible (VIS) source images are decomposed by NSCT into low-frequency and high-frequency sub-band coefficients, giving an effective representation of edges and curves. The low-frequency coefficients are then further decomposed into base and detail layers by a combination of RGF and a Gaussian filter (GF) to retain features across multiple scales and to reduce halos near edges. Base layers are fused by a saliency-based fusion rule, and detail layers are fused by a max-absolute rule. High-frequency coefficients are fused by a consistency-verification-based fusion rule to preserve visual details and suppress noise from the source images. Finally, the fused image is reconstructed by the inverse NSCT, yielding good visual quality. The method is assessed with several evaluation metrics, and the results suggest that it better preserves source information and improves clarity and contrast.
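The abstract does not give implementation details, so the sketch below only illustrates the low-frequency fusion step in a simplified form: a plain Gaussian filter stands in for the RGF + GF base/detail decomposition, and a Laplacian-energy map is used as one plausible saliency measure. All function names, parameters, and the saliency choice are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_low_frequency(lf_ir, lf_vis, sigma=2.0):
    """Sketch of low-frequency sub-band fusion (assumed parameters).

    Base layers: saliency-weighted average. Detail layers: max-absolute rule.
    A Gaussian filter approximates the RGF + GF decomposition from the paper.
    """
    # Base/detail split (edge-preserving RGF replaced by a plain Gaussian here)
    base_ir, base_vis = gaussian_filter(lf_ir, sigma), gaussian_filter(lf_vis, sigma)
    det_ir, det_vis = lf_ir - base_ir, lf_vis - base_vis

    # Saliency map: smoothed energy of a difference-of-Gaussian response
    # (one common choice; the paper's exact saliency measure is not specified here)
    def saliency(img):
        high_pass = img - gaussian_filter(img, 1.0)
        return gaussian_filter(high_pass ** 2, 3.0) + 1e-12

    s_ir, s_vis = saliency(lf_ir), saliency(lf_vis)
    w_ir = s_ir / (s_ir + s_vis)               # per-pixel weight for the IR base layer
    fused_base = w_ir * base_ir + (1.0 - w_ir) * base_vis

    # Max-absolute rule for detail layers
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return fused_base + fused_detail

if __name__ == "__main__":
    # Stand-in sub-bands; real inputs would come from an NSCT decomposition
    rng = np.random.default_rng(0)
    lf_ir, lf_vis = rng.random((128, 128)), rng.random((128, 128))
    print(fuse_low_frequency(lf_ir, lf_vis).shape)  # (128, 128)
```

In the full pipeline these fused low-frequency coefficients would be recombined with the fused high-frequency sub-bands and passed through the inverse NSCT to reconstruct the final image.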