Most traditional infrared and visible image fusion methods ignore weak texture details, especially in low-light visible images, where weak details are easily drowned out by noise or poor illumination. To address this problem, we propose a novel infrared and low-light visible image fusion method built on low-light visible image enhancement, a weak-feature extraction strategy, and detail-preserving fusion rules. By combining local and global contrast enhancement, an adaptive light adjustment algorithm is proposed to improve the brightness and texture details of low-light visible images. In addition, we design a hybrid multiscale decomposition model based on guided filters (GFs) and side window guided filters (SWGFs) to decompose the source images into a base layer, large-scale detail layers, and small-scale detail layers, which reflect the background, large edge structures, and weak texture details of the source images, respectively. Subsequently, visual saliency retention, a normalized arctan function, and edge-preservation-based consistency verification are applied to highlight salient targets and retain weak details when fusing the three layers. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over state-of-the-art methods in highlighting salient targets, avoiding edge blurring, and retaining weak details.
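The hybrid multiscale decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: simple box (mean) filters stand in for the GF/SWGF smoothing operators, the radii `r_small`/`r_large` are arbitrary illustrative choices, and all function names are hypothetical. The key structural property shown is that the base layer plus the two detail layers reconstruct the source image exactly.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)x(2r+1) window with edge padding.
    A stand-in here for the edge-preserving GF/SWGF smoothing in the paper."""
    padded = np.pad(img, r, mode="edge")
    # Integral image gives O(1) window sums per pixel.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    k = 2 * r + 1
    sums = ii[k:k + h, k:k + w] - ii[:h, k:k + w] - ii[k:k + h, :w] + ii[:h, :w]
    return sums / (k * k)

def hybrid_decompose(img, r_small=2, r_large=8):
    """Decompose an image into base, large-scale detail, and small-scale
    detail layers via two successive smoothings (radii are illustrative)."""
    smooth_small = box_filter(img, r_small)          # removes fine texture
    base = box_filter(smooth_small, r_large)         # background / base layer
    small_detail = img - smooth_small                # weak texture details
    large_detail = smooth_small - base               # large edge structures
    return base, large_detail, small_detail
```

By construction, `base + large_detail + small_detail` equals the input image, so any layer-wise fusion rule applied afterward manipulates a lossless decomposition.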