Abstract
Infrared and visible image fusion aims to provide a more comprehensive image for downstream tasks by highlighting salient targets while retaining rich texture information. Existing deep-learning-based fusion methods suffer from insufficient multimodal information extraction and texture loss. In this paper, we propose a progressive texture-preserving fusion network (PTPFusion) that extracts complementary information from multimodal images to address these issues. To reduce texture loss, we design multiple consecutive texture-preserving blocks (TPBs) that enhance texture in the fused result. Each TPB enhances features through a parallel architecture consisting of a residual block and derivative operators. In addition, a novel cross-channel attention (CCA) fusion module is developed to capture complementary information by modeling global feature interactions via a cross-query mechanism, followed by information fusion to highlight the features of salient targets. To avoid information loss, features extracted at different stages are merged to form the TPB output. Finally, a decoder generates the fused image. Extensive experiments on three datasets show that our proposed fusion algorithm outperforms existing state-of-the-art methods.
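The abstract only names the two key components; the paper's exact architecture is not given here. The sketch below is a minimal, hypothetical PyTorch illustration of the two ideas as described: a texture-preserving block with a parallel residual branch and fixed derivative (Sobel) operators, and a channel-wise cross-attention fusion step in which one modality queries the other. The class names, the choice of Sobel kernels, and the channel-attention layout are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TexturePreservingBlock(nn.Module):
    """Hypothetical TPB: a residual conv branch in parallel with fixed
    derivative (Sobel) operators, summed to enhance texture features."""

    def __init__(self, channels: int):
        super().__init__()
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Fixed Sobel kernels applied depthwise as the derivative branch.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # (2, 1, 3, 3)
        kernel = kernel.repeat(channels, 1, 1, 1)               # (2C, 1, 3, 3)
        self.register_buffer("deriv_kernel", kernel)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)        # 2C -> C
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x + self.res(x)                              # residual branch
        grads = F.conv2d(x, self.deriv_kernel, padding=1,
                         groups=self.channels)                  # depthwise Sobel
        return residual + self.fuse(grads)                      # merge branches


class CrossChannelAttention(nn.Module):
    """Hypothetical CCA fusion: the infrared stream queries the visible
    stream (cross-query), with attention computed across channels."""

    def __init__(self, channels: int):
        super().__init__()
        self.q_ir = nn.Conv2d(channels, channels, 1)
        self.kv_vis = nn.Conv2d(channels, 2 * channels, 1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        b, c, h, w = ir.shape
        q = self.q_ir(ir).flatten(2)                            # (B, C, HW)
        k, v = self.kv_vis(vis).flatten(2).chunk(2, dim=1)      # (B, C, HW) each
        # Channel-wise attention map: (B, C, HW) @ (B, HW, C) -> (B, C, C)
        attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)
        out = (attn @ v).reshape(b, c, h, w)                    # (B, C, H, W)
        return out + ir                                         # residual fusion


if __name__ == "__main__":
    ir, vis = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
    tpb = TexturePreservingBlock(32)
    fused = CrossChannelAttention(32)(tpb(ir), tpb(vis))
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

A symmetric variant would also let the visible stream query the infrared one and combine both directions; the single-direction form above is kept only to make the cross-query idea concrete.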