Abstract

Visible images provide abundant texture detail and environmental information, while infrared images offer night-time visibility and robustness to highly dynamic regions; fusing these complementary features from different sensors into a single informative image is therefore a meaningful task. In this article, we propose an unsupervised end-to-end learning framework for infrared and visible image fusion. We first construct a sufficiently large benchmark training set from visible and infrared frames, which addresses the shortage of training data. Because labeled datasets are unavailable, our architecture is trained with a robust mixed loss function that combines a modified structural similarity (M-SSIM) metric with total variation (TV), yielding an unsupervised learning process that adaptively fuses thermal radiation and texture details while suppressing noise interference. Moreover, because the model is end-to-end, it avoids hand-crafted fusion rules and reduces computational cost. Extensive experimental results demonstrate that the proposed architecture outperforms state-of-the-art methods in both subjective and objective evaluations.
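To make the mixed objective concrete, the sketch below shows a minimal PyTorch loss combining an SSIM term with a TV regularizer. The uniform window, the equal weighting of the two SSIM terms, and the balance weight `lam` are illustrative assumptions; the paper's M-SSIM defines its own (presumably adaptive) weighting, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ssim_map(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    # Local statistics via a uniform window -- a common simplification
    # of the Gaussian window used in the original SSIM definition.
    mu_x = F.avg_pool2d(x, win, 1, win // 2, count_include_pad=False)
    mu_y = F.avg_pool2d(y, win, 1, win // 2, count_include_pad=False)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2, count_include_pad=False) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2, count_include_pad=False) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2, count_include_pad=False) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den  # per-pixel SSIM map in [-1, 1]

def tv_loss(img):
    # Anisotropic total variation: penalizes abrupt intensity jumps,
    # which suppresses noise in the fused output.
    dh = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dv = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dh + dv

def fusion_loss(fused, ir, vis, lam=0.1):
    # Structural term against both source images. Averaging the two
    # SSIM maps equally is an assumption; M-SSIM in the paper may
    # weight them adaptively (e.g., by local thermal saliency).
    ssim_term = 1 - 0.5 * (ssim_map(fused, ir) + ssim_map(fused, vis)).mean()
    return ssim_term + lam * tv_loss(fused)
```

Because every term is differentiable with respect to the fused image, the loss can drive an end-to-end fusion network directly, with no ground-truth fused images and no hand-crafted fusion rules.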
