The aim of infrared and visible image fusion is to produce a composite image that simultaneously highlights infrared targets and preserves abundant texture details. Despite the promising performance of current deep-learning-based fusion algorithms, most of them rely heavily on convolution operations, which limits their capability to represent long-range contextual information. To overcome this limitation, we design a novel infrared and visible image fusion network based on Res2Net and a multiscale Transformer, called RMTFuse. Specifically, we devise a local feature extraction module based on Res2Net (LFE-RN), in which dense connections are adopted to reuse information that might be lost in convolution operations, and a global feature extraction module based on a multiscale Transformer (GFE-MT), which is composed of a Transformer module and a global feature integration module (GFIM). The Transformer module extracts coarse-to-fine semantic features of the source images, while GFIM further aggregates the hierarchical features to strengthen contextual feature representations. Furthermore, we employ a pre-trained VGG-16 network to compute the loss on features at different depths. Extensive experiments on mainstream datasets demonstrate that RMTFuse is superior to state-of-the-art methods in both subjective and objective assessments.
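To illustrate the VGG-16 feature loss mentioned above, the following is a minimal PyTorch sketch of a multi-depth perceptual loss. The chosen layer indices, per-layer weights, and the L1 distance are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class MultiDepthVGGLoss(nn.Module):
    """Sketch of a perceptual loss comparing VGG-16 features at several depths.

    Layer indices 3/8/15/22 correspond to relu1_2, relu2_2, relu3_3 and relu4_3
    of torchvision's vgg16().features; they and the equal weights are assumptions.
    """

    def __init__(self, layer_ids=(3, 8, 15, 22), weights=(1.0, 1.0, 1.0, 1.0)):
        super().__init__()
        features = vgg16(pretrained=True).features.eval()
        for p in features.parameters():
            p.requires_grad = False  # VGG-16 is used as a frozen feature extractor
        self.features = features
        self.layer_ids = list(layer_ids)
        self.weights = dict(zip(layer_ids, weights))
        self.max_layer = max(layer_ids)
        self.criterion = nn.L1Loss()

    def _extract(self, x):
        # VGG-16 expects 3-channel input; replicate single-channel images.
        if x.shape[1] == 1:
            x = x.repeat(1, 3, 1, 1)
        feats = {}
        for idx, layer in enumerate(self.features):
            x = layer(x)
            if idx in self.layer_ids:
                feats[idx] = x
            if idx == self.max_layer:
                break
        return feats

    def forward(self, fused, source):
        # Weighted sum of feature distances between the fused and a source image.
        f_fused = self._extract(fused)
        f_src = self._extract(source)
        return sum(self.weights[i] * self.criterion(f_fused[i], f_src[i])
                   for i in self.layer_ids)
```

In practice such a loss term would be evaluated against both the infrared and the visible inputs and combined with the network's other loss components; the combination scheme here is left unspecified.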