Abstract

The image fusion community is thriving on the wave of deep learning, and the most popular fusion methods are usually built upon well-designed network structures. However, most current methods do not fully exploit deeper features and ignore the importance of long-range dependencies. In this paper, a convolution and vision Transformer-based multi-scale parallel cross fusion network for infrared and visible images (MPCFusion) is proposed. To exploit deeper texture details, a feature extraction module based on convolution and vision Transformer is designed. To correlate the shallow features of different modalities, a parallel cross-attention module is proposed, in which a parallel-channel model efficiently preserves modality-specific features and a subsequent cross-spatial model ensures information interaction between the modalities. Moreover, a cross-domain attention module based on convolution and vision Transformer is proposed to capture long-range dependencies among deep features and effectively mitigate the loss of global context. Finally, a nest-connection-based decoder performs feature reconstruction. In particular, we design a new texture-guided structural similarity loss function that drives the network to preserve more complete texture details. Extensive experimental results demonstrate that MPCFusion achieves excellent fusion performance and generalization capability. The source code will be released at https://github.com/YQ-097/MPCFusion.
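To make the texture-guided structural similarity objective more concrete, the following minimal PyTorch sketch weights per-source SSIM terms by a gradient-based texture map, so that each pixel of the fused image is pulled toward whichever source image carries richer texture there. All identifiers (sobel_gradient, ssim_map, texture_guided_ssim_loss), the Sobel texture proxy, the uniform 11x11 SSIM window, and the soft weighting scheme are assumptions made for illustration; they are not taken from the paper's released implementation and the authors' exact formulation may differ.

```python
# Hedged sketch of a texture-guided SSIM-style loss for IR/visible fusion.
# Inputs are assumed to be single-channel tensors of shape (B, 1, H, W) in [0, 1].
import torch
import torch.nn.functional as F


def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude, used here as a simple texture proxy."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def ssim_map(x: torch.Tensor, y: torch.Tensor, window: int = 11) -> torch.Tensor:
    """Local SSIM map between two images using uniform averaging windows."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    sigma_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return num / den


def texture_guided_ssim_loss(fused: torch.Tensor,
                             ir: torch.Tensor,
                             vis: torch.Tensor) -> torch.Tensor:
    """Blend the per-source SSIM maps with weights given by relative texture strength."""
    g_ir, g_vis = sobel_gradient(ir), sobel_gradient(vis)
    # Soft weights: pixels where the IR gradient dominates favour the IR SSIM term.
    w_ir = g_ir / (g_ir + g_vis + 1e-6)
    w_vis = 1.0 - w_ir
    s_ir = ssim_map(fused, ir)
    s_vis = ssim_map(fused, vis)
    return 1.0 - (w_ir * s_ir + w_vis * s_vis).mean()


# Usage: loss = texture_guided_ssim_loss(fused, ir, vis); loss.backward()
```

This kind of loss is typically combined with intensity or gradient terms during training; the sketch only illustrates how texture guidance can modulate a structural similarity objective.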
