Effective fusion of infrared and visible images enhances the visibility of infrared targets while retaining the visual detail of the visible image. However, adequately balancing the brightness and contrast of the fused image remains a significant challenge, as does preserving detailed information. To address these issues, this paper proposes a fusion algorithm based on multi-scale decomposition and adaptive contrast enhancement. First, we present a hybrid multi-scale decomposition method that comprehensively extracts valuable information from the source images. Second, we introduce an adaptive base-layer optimization approach to regulate the brightness and contrast of the fused image. Finally, we design a saliency-detection-based weight mapping rule for integrating the small-scale layers, thereby preserving edge structure in the fused result. Both qualitative and quantitative experimental results confirm the superiority of the proposed method over 11 state-of-the-art image fusion methods. Our method preserves more texture and achieves higher contrast, which is advantageous for monitoring tasks.
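The abstract outlines a three-step pipeline without implementation details, so the following Python sketch only illustrates the general structure under stated assumptions: Gaussian smoothing stands in for the hybrid multi-scale decomposition, a mean/std rescaling stands in for the adaptive base-layer optimization, and blurred Laplacian magnitude stands in for the saliency-based weight map. All function names and parameters here are hypothetical, not the paper's actual method.

```python
import cv2
import numpy as np

def decompose(img, sigmas=(2.0, 4.0, 8.0)):
    """Split an image into one base layer and several detail layers via
    repeated Gaussian smoothing (placeholder for the hybrid decomposition)."""
    layers, current = [], img.astype(np.float32)
    for sigma in sigmas:
        smoothed = cv2.GaussianBlur(current, (0, 0), sigma)
        layers.append(current - smoothed)      # detail layer at this scale
        current = smoothed
    return current, layers                     # base layer, detail layers

def fuse_base(base_ir, base_vis):
    """Average the base layers, then rescale mean/std as a crude
    placeholder for the adaptive brightness/contrast optimization."""
    fused = 0.5 * (base_ir + base_vis)
    target_mean = max(base_ir.mean(), base_vis.mean())
    target_std = max(base_ir.std(), base_vis.std())
    return (fused - fused.mean()) / (fused.std() + 1e-6) * target_std + target_mean

def saliency_weight(layer):
    """Blurred Laplacian magnitude as a simple saliency proxy."""
    return cv2.GaussianBlur(np.abs(cv2.Laplacian(layer, cv2.CV_32F)), (0, 0), 2.0)

def fuse(ir, vis):
    """Fuse grayscale infrared and visible images of the same size."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    fused = fuse_base(base_ir, base_vis)
    for d_ir, d_vis in zip(det_ir, det_vis):
        w_ir, w_vis = saliency_weight(d_ir), saliency_weight(d_vis)
        w = w_ir / (w_ir + w_vis + 1e-6)       # per-pixel weight map
        fused += w * d_ir + (1.0 - w) * d_vis
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage (hypothetical file names):
# fused = fuse(cv2.imread("ir.png", 0), cv2.imread("vis.png", 0))
```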