Abstract

In recent years, image fusion has emerged as an important research field due to its wide range of applications. Images acquired by different sensors differ significantly in feature representation because of their different imaging principles. Taking visible and infrared image fusion as an example, visible images contain abundant texture details at high spatial resolution, whereas infrared images capture clear target contours based on thermal radiation and remain effective in day/night and all-weather conditions. Most existing methods apply the same feature extraction algorithm to both visible and infrared images, ignoring the differences between them. This paper therefore proposes what we believe to be a novel fusion method that combines a multi-level image decomposition scheme with a deep-learning fusion strategy for multiple image types. For image decomposition, we not only use a multi-level extended approximate low-rank projection matrix learning decomposition to extract salient features from both visible and infrared images, but also apply a multi-level guided filter decomposition to obtain texture information from visible images. For image fusion, a novel strategy based on a pretrained ResNet50 network fuses the multi-level features of the visible and infrared images into corresponding multi-level fused features, improving the quality of the final fused image. The proposed method is evaluated both subjectively and objectively in extensive experiments, and the results demonstrate that it achieves better fusion performance than existing methods.
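To illustrate the multi-level decomposition idea described above, here is a minimal sketch in Python/NumPy: an image is repeatedly smoothed, and the residual at each level is kept as a detail layer, leaving a coarse base layer. A simple box (mean) filter stands in for the guided filter used in the paper, and all function names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)x(2r+1) window, via padded cumulative sums.
    A stand-in for the guided filter; edge padding keeps the output shape."""
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/col for inclusive sums
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def multilevel_decompose(img, levels=3, r=4):
    """Split an image into one coarse base layer and `levels` detail layers.
    Summing the base and all detail layers reconstructs the input exactly."""
    base = img.astype(np.float64)
    details = []
    for _ in range(levels):
        smoothed = box_filter(base, r)
        details.append(base - smoothed)  # residual texture at this scale
        base = smoothed
    return base, details
```

In the paper's pipeline, the detail layers of the visible image would carry its texture information, while a separate low-rank decomposition extracts salient features; the sketch only shows the generic base/detail split common to such schemes.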
