Infrared and visible image fusion holds significant value across various fields due to its ability to provide complementary information. However, existing fusion algorithms are often sensitive to the external physical conditions under which the source images were acquired, leading to inconsistent fusion quality and limited adaptive capability. In this paper, we propose a fractional wavelet transform fusion algorithm for infrared and visible images. The algorithm uses fractional wavelet domain image decomposition to more accurately locate image structures at different scales. Initially, we perform image decomposition using a non-subsampled shearlet transform (NSST) with multi-scale and multi-directional decomposition. Subsequently, to effectively fuse detailed information, we employ the discrete fractional wavelet transform (DFRWT) to decompose the low-frequency subbands with an appropriately chosen fractional order p. The high-frequency subbands are fused using a parameter-adaptive pulse-coupled neural network (PCNN) model. To validate the superiority of our algorithm, we conduct visual and quantitative comparative analyses against several existing algorithms. The results confirm that our proposed algorithm is robust to external physical conditions and surpasses these existing algorithms, making it highly applicable for infrared–visible fusion tasks.
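To make the high-frequency fusion rule concrete, the sketch below implements a simplified PCNN firing-count rule in NumPy: each subband drives a PCNN, and the fused coefficient at each pixel is taken from whichever source fires more often. This is a minimal illustration, not the paper's method: the parameters (`beta`, iteration count, decay constants) are illustrative defaults rather than the paper's adaptively estimated values, and the 3×3 linking kernel is a common choice assumed here.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(coeffs, iters=100, beta=0.2, alpha_l=0.1,
                    alpha_t=0.3, v_l=1.0, v_t=20.0):
    """Simplified PCNN: return the total firing count per pixel.

    `coeffs` is one high-frequency subband. All parameters are
    illustrative defaults, not the paper's adaptive settings.
    """
    # Normalized coefficient magnitude serves as the external stimulus.
    s = np.abs(coeffs) / (np.abs(coeffs).max() + 1e-12)
    # Common 3x3 linking kernel (assumed; papers vary in this choice).
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    l = np.zeros_like(s)       # linking input
    y = np.zeros_like(s)       # firing output
    theta = np.ones_like(s)    # dynamic threshold
    fires = np.zeros_like(s)   # accumulated firing counts
    for _ in range(iters):
        l = np.exp(-alpha_l) * l + v_l * convolve(y, w, mode="nearest")
        u = s * (1.0 + beta * l)                     # internal activity
        y = (u > theta).astype(s.dtype)              # fire if above threshold
        theta = np.exp(-alpha_t) * theta + v_t * y   # raise threshold on firing
        fires += y
    return fires

def fuse_highfreq(band_ir, band_vis):
    """Pick, per pixel, the coefficient whose PCNN fires more often."""
    mask = pcnn_firing_map(band_ir) >= pcnn_firing_map(band_vis)
    return np.where(mask, band_ir, band_vis)
```

In a full pipeline, `fuse_highfreq` would be applied to each directional high-frequency subband produced by the NSST, while the low-frequency subbands would be fused in the DFRWT domain as described in the abstract.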