Abstract

In this paper, an Adaptive Fusion Transformer (AFT) is proposed for unsupervised pixel-level fusion of visible and infrared images. Unlike existing convolutional networks, AFT adopts a transformer to model the relationships between multi-modality images and to explore cross-modal interactions. The encoder of AFT uses a Multi-Head Self-Attention (MSA) module and a Feed-Forward (FF) network for feature extraction. A Multi-head Self-Fusion (MSF) module is then designed for adaptive perceptual fusion of these features. By sequentially stacking MSF, MSA, and FF modules, a fusion decoder is constructed that gradually locates complementary features for recovering informative images. In addition, a structure-preserving loss is defined to enhance the visual quality of the fused images. Extensive experiments are conducted on several datasets to compare the proposed AFT method with 21 popular approaches. The results show that AFT achieves state-of-the-art performance in both quantitative metrics and visual perception.
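The sketch below is not the authors' implementation; it is a minimal illustration of how the modules named in the abstract could be stacked, assuming visible and infrared images have already been tokenized into (B, N, C) feature sequences (e.g. by patch embedding). The block sizes, head counts, and the way the two modalities are merged in the fusion step are placeholder choices for illustration only.

```python
# Illustrative sketch (not the paper's code): MSA + FF encoder blocks per modality,
# an MSF-style cross-modal fusion step, and a decoder that interleaves fusion with
# MSA + FF refinement, as described at a high level in the abstract.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Multi-Head Self-Attention (MSA) followed by a Feed-Forward (FF) network."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.msa(h, h, h)[0]          # self-attention with residual connection
        return x + self.ff(self.norm2(x))     # feed-forward with residual connection

class FusionBlock(nn.Module):
    """Assumed MSF-style step: attention over the concatenated visible/infrared
    tokens (cross-modal interaction), then MSA + FF refinement as in the encoder."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.fuse_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.reduce = nn.Linear(dim * 2, dim)  # merge the two modalities back to one stream
        self.refine = EncoderBlock(dim, heads)

    def forward(self, vis_tok, ir_tok):
        both = torch.cat([vis_tok, ir_tok], dim=1)       # (B, 2N, C)
        fused, _ = self.fuse_attn(both, both, both)      # attend across both modalities
        n = vis_tok.shape[1]
        fused = self.reduce(torch.cat([fused[:, :n], fused[:, n:]], dim=-1))  # (B, N, C)
        return self.refine(fused)

# Usage with dummy token sequences standing in for embedded image patches.
dim, heads = 64, 4
enc_vis, enc_ir = EncoderBlock(dim, heads), EncoderBlock(dim, heads)
decoder = nn.ModuleList([FusionBlock(dim, heads) for _ in range(2)])

vis = torch.randn(1, 256, dim)   # dummy visible-image tokens
ir = torch.randn(1, 256, dim)    # dummy infrared-image tokens
f_vis, f_ir = enc_vis(vis), enc_ir(ir)
out = f_vis
for blk in decoder:
    out = blk(out, f_ir)         # gradually integrate complementary infrared features
```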
