Abstract

Infrared and visible image fusion integrates complementary information from different modalities to generate a single image with comprehensive details and highlighted targets, thereby deepening scene interpretation. However, existing deep-learning-based methods do not account for noise, leading to suboptimal, noisy fusion results. To address this issue, we propose a decomposition-driven neural network (DeDNet) that performs fusion and noise removal jointly. By introducing constraints between the fused image and the ground-truth source images into the loss function, we develop an autoencoder as the basic fusion and denoising network. Furthermore, we propose a decomposition network that guides the decomposition of the fusion result, improving denoising and detail recovery. Experiments demonstrate that DeDNet outperforms state-of-the-art methods in both objective and subjective evaluations and yields competitive performance in detection and segmentation. On the Qcb, EN, SSIM, PSNR, and CC metrics, DeDNet improves on the compared methods by an average of 10.92%, 21.13%, 82.97%, 8.55%, and 16.26%, respectively. The source code is available at https://github.com/JasonWong30/DeDNet.
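The abstract only names the loss constraints between the fused output and the ground-truth source images; the PyTorch sketch below is a minimal illustration of how such a joint fusion-and-denoising constraint is commonly written, not DeDNet's actual formulation. The function name, the intensity/gradient terms, and the weights `alpha` and `beta` are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fusion_denoise_loss(fused, ir_clean, vis_clean, alpha=1.0, beta=1.0):
    """Illustrative joint fusion/denoising loss (hypothetical, not DeDNet's).

    The fused output is constrained against the *clean* (noise-free) source
    images, so the network learns to suppress noise while fusing.
    Tensors are (B, C, H, W).
    """
    # Intensity constraint: keep the brighter of the two clean sources,
    # a common choice for highlighting infrared targets.
    intensity = F.l1_loss(fused, torch.maximum(ir_clean, vis_clean))

    # Finite-difference image gradients along width and height.
    def grads(x):
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return gx, gy

    fgx, fgy = grads(fused)
    igx, igy = grads(ir_clean)
    vgx, vgy = grads(vis_clean)

    # Gradient constraint: match the stronger clean-source gradient,
    # preserving texture detail from whichever modality carries it.
    tgx = torch.maximum(igx.abs(), vgx.abs())
    tgy = torch.maximum(igy.abs(), vgy.abs())
    gradient = F.l1_loss(fgx.abs(), tgx) + F.l1_loss(fgy.abs(), tgy)

    return alpha * intensity + beta * gradient
```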
