To address the problems of image distortion, edge blurring, and Gibbs phenomena in the traditional wavelet transform (WT) and the loss of subtle features in the Non-Subsampled Shearlet Transform (NSST), and considering the physical characteristics of infrared and visible images, this paper proposes an infrared and visible image fusion algorithm based on the Lifting Stationary Wavelet Transform (LSWT) and NSST. First, since the LSWT is computationally fast and retains all the advantages of the traditional WT, it is used to decompose the infrared and visible images into low-frequency coefficients and multi-scale, multi-directional high-frequency coefficients. Second, NSST multi-scale decomposition is applied to extract the target features and detail features from the high- and low-frequency sub-bands, yielding new high- and low-frequency sub-bands. Third, fusion rules are designed according to the physical characteristics that the low- and high-frequency coefficients represent: in the low-frequency sub-band, the Discrete Cosine Transform (DCT) and Local Spatial Frequency (LSF) are introduced, and an LSF-based adaptive weighted fusion rule is applied in the DCT domain; in the high-frequency sub-band, the fusion strategy improves regional contrast by exploiting the spectral characteristics of human vision. Finally, the Inverse Lifting Stationary Wavelet Transform (ILSWT) reconstructs the fused coefficients to obtain the final fused image. To verify the advantages of the proposed algorithm, nine classic and state-of-the-art infrared and visible image fusion algorithms are selected for subjective and objective comparison. For the objective evaluation, a comprehensive ranking index is designed from nine classical metrics. Simulation experiments covering all ten algorithms (the proposed method and the nine comparison methods) demonstrate that the proposed algorithm offers better performance and flexibility.
The results show that the proposed algorithm produces fused images with clear edges, prominent targets, and good visual perception, outperforming state-of-the-art image fusion algorithms.
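The low-frequency fusion rule described above (LSF adaptive weighting in the DCT domain) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 8×8 block size, the per-block scalar weighting, and the exact spatial-frequency formula are assumptions made for the sake of a runnable example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def spatial_frequency(block):
    # Spatial frequency of a block: sqrt(RF^2 + CF^2), where RF and CF
    # are the RMS of row-wise and column-wise first differences.
    rf2 = np.mean((block[:, 1:] - block[:, :-1]) ** 2)
    cf2 = np.mean((block[1:, :] - block[:-1, :]) ** 2)
    return np.sqrt(rf2 + cf2)

def fuse_low_freq_dct(lf_ir, lf_vi, block=8, eps=1e-12):
    # Fuse two low-frequency sub-bands blockwise in the DCT domain,
    # weighting each block pair by its local spatial frequency.
    # Assumes both sub-bands have dimensions divisible by `block`.
    h, w = lf_ir.shape
    fused = np.empty_like(lf_ir, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = lf_ir[i:i + block, j:j + block]
            b = lf_vi[i:i + block, j:j + block]
            # Adaptive weight from the two blocks' spatial frequencies:
            # the sharper block contributes more to the fused result.
            sa, sb = spatial_frequency(a), spatial_frequency(b)
            wa = sa / (sa + sb + eps)
            # Weighted average of DCT coefficients, then inverse DCT
            # to return the fused block to the spatial domain.
            d = wa * dctn(a, norm="ortho") + (1 - wa) * dctn(b, norm="ortho")
            fused[i:i + block, j:j + block] = idctn(d, norm="ortho")
    return fused
```

In this simplification, blocks with higher spatial frequency (more local detail) dominate the fused low-frequency sub-band, which is the intent of an LSF-driven adaptive weight; the full algorithm would apply this inside the LSWT/NSST decomposition rather than to raw images.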