Abstract

Infrared and visible image fusion aims to generate an image with prominent target information and abundant texture details. Most existing methods rely on manually designed, complex fusion rules to realize image fusion. Some deep learning fusion networks tend to ignore the correlation between different-level features, which may cause loss of intensity information and texture details in the fused image. To overcome these drawbacks, we propose a multi-level hybrid transmission network for infrared and visible image fusion, which mainly contains a multi-level residual encoder module and a hybrid transmission decoder module. Considering the great difference between infrared and visible images, the multi-level residual encoder module is designed with two independent branches to extract abundant features from the source images. To avoid complicated fusion strategies, a concatenate-convolution operation is applied to fuse the features. To exploit information from the source images efficiently, the hybrid transmission decoder module is constructed to integrate different-level features. Experimental results and analyses on three public datasets demonstrate that our method achieves high-quality image fusion and outperforms the comparison methods both qualitatively and quantitatively. In addition, the proposed method has good real-time performance in infrared and visible image fusion.
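The concatenate-convolution fusion mentioned above can be illustrated with a minimal sketch. The abstract does not specify kernel sizes or channel counts, so the function name, shapes, and the choice of a 1x1 convolution (which reduces to a per-pixel linear map over channels) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def concat_conv_fuse(feat_ir, feat_vis, weights, bias):
    """Fuse two (C, H, W) feature maps via concatenate-convolution.

    Hypothetical sketch: the two encoder-branch feature maps are
    concatenated along the channel axis, then mixed by an assumed
    1x1 convolution, i.e. a per-pixel linear map over channels.

    weights: (C_out, 2*C) matrix of the 1x1 convolution.
    bias:    (C_out,) vector.
    """
    fused = np.concatenate([feat_ir, feat_vis], axis=0)  # (2C, H, W)
    c2, h, w = fused.shape
    flat = fused.reshape(c2, h * w)                      # (2C, H*W)
    out = weights @ flat + bias[:, None]                 # (C_out, H*W)
    return out.reshape(-1, h, w)                         # (C_out, H, W)

# Toy usage with random stand-in features (not real encoder outputs)
rng = np.random.default_rng(0)
ir = rng.standard_normal((8, 16, 16))
vis = rng.standard_normal((8, 16, 16))
w = rng.standard_normal((8, 16)) * 0.1
b = np.zeros(8)
fused = concat_conv_fuse(ir, vis, w, b)
print(fused.shape)  # (8, 16, 16)
```

In a trained network the convolution weights would be learned end-to-end, letting the network decide how to weight infrared intensity against visible texture at each pixel instead of applying a hand-crafted fusion rule.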

