Traditional infrared (IR) and visible (VIS) image fusion methods require source images of identical resolution, which is problematic given the inherently low resolution of IR imagery. In this paper, we introduce an image fusion approach that harmonizes the resolution of IR-VIS source images and produces fused images at a higher resolution. We employ a convolutional neural network to correct real-world degradations in the IR data, with a particular focus on super-resolution through a multi-degradation resolution enhancement network (MDREN). We adopt the undecimated dual-tree complex wavelet transform (UDT-CWT) in our fusion process for its near shift-invariance and improved directional selectivity, which yields fused images with coherent information and reduced noise and information loss. Experiments using five image quality assessment measures compare the proposed method against nine state-of-the-art approaches and demonstrate its efficacy.
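The wavelet-domain fusion step described above can be illustrated with a minimal sketch. This is not the paper's UDT-CWT pipeline: as a simplified stand-in it uses a one-level decimated 2D Haar transform (so it lacks the shift-invariance and directionality that motivate UDT-CWT), and a common baseline fusion rule that is assumed here, not taken from the paper: average the approximation band and select the larger-magnitude coefficient in each detail band.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition (simplified stand-in for the
    UDT-CWT used in the paper). Expects even height and width."""
    # Transform along rows (columns of pixels paired up).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Transform along columns to get the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Inverse of haar2d (perfect reconstruction for orthonormal Haar)."""
    lh, hl, hh = details
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols))
    hi = np.empty((2 * rows, cols))
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    out = np.empty((2 * rows, 2 * cols))
    out[:, 0::2] = (lo + hi) / np.sqrt(2)
    out[:, 1::2] = (lo - hi) / np.sqrt(2)
    return out

def fuse(ir, vis):
    """Wavelet-domain fusion (assumed baseline rule, not the paper's):
    average the approximation bands, keep the larger-magnitude
    coefficient in each detail band."""
    ll_a, det_a = haar2d(ir)
    ll_b, det_b = haar2d(vis)
    ll_f = (ll_a + ll_b) / 2
    det_f = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                  for da, db in zip(det_a, det_b))
    return ihaar2d(ll_f, det_f)
```

Fusing an image with itself reproduces the input, a quick sanity check that the transform pair is invertible and the rule is identity-preserving.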