Abstract

A single infrared or visible image cannot clearly present both the texture details and the thermal radiation information of a scene under poor illumination, bad weather, or other complex conditions, so it is necessary to fuse the infrared and visible images into one image. In this paper, we propose a novel deep fusion architecture for fusing visible and infrared images without any reference ground truth. Unlike existing deep image fusion methods, which directly output the fused image, our network estimates a weight score for each pixel to determine the contributions of the two source images; this strategy transfers the valuable information in the source images to the fused image. Considering the salient thermal radiation information in the infrared image, a mask of the infrared image is generated and used to preserve valuable content from both source images in the fused result. Furthermore, a hybrid loss is designed to keep the fused image consistent with the two source images. Owing to the weight estimation, the mask strategy, and the hybrid loss, the images fused by our method jointly maintain thermal radiation and texture details, achieving state-of-the-art performance compared with existing fusion approaches. Our code is publicly available at https://github.com/NlCxg/MDFN.
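To make the weighting strategy concrete, the sketch below shows one minimal PyTorch-style rendering of per-pixel weighted fusion together with a mask-guided hybrid loss. Everything here is an illustrative assumption rather than the authors' actual MDFN design: the `WeightNet` and `hybrid_loss` names, the tiny three-layer architecture, the channel sizes, and the fixed saliency threshold are all hypothetical.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    # Hypothetical weight estimator: maps the concatenated infrared and
    # visible images to a per-pixel weight map in [0, 1] via a sigmoid.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, ir, vis):
        w = torch.sigmoid(self.body(torch.cat([ir, vis], dim=1)))
        fused = w * ir + (1.0 - w) * vis  # per-pixel convex combination
        return fused, w

def hybrid_loss(fused, ir, vis, thresh=0.5):
    # Illustrative mask of salient thermal radiation: pixels where the
    # infrared intensity exceeds a (hypothetical) fixed threshold.
    mask = (ir > thresh).float()
    loss_ir = ((fused - ir) ** 2 * mask).mean()          # keep radiation inside the mask
    loss_vis = ((fused - vis) ** 2 * (1 - mask)).mean()  # keep texture outside the mask
    return loss_ir + loss_vis

# Dummy usage with single-channel images in [0, 1]; no ground-truth
# fused image is needed, matching the unsupervised setting above.
net = WeightNet()
ir = torch.rand(1, 1, 64, 64)
vis = torch.rand(1, 1, 64, 64)
fused, w = net(ir, vis)
loss = hybrid_loss(fused, ir, vis)
loss.backward()
```

The sigmoid-bounded weight map guarantees the fused pixel stays between the two source intensities, and the mask splits the consistency terms so that thermal radiation dominates in salient regions while visible texture dominates elsewhere; the paper's actual mask generation and loss terms may differ.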
