Abstract

Deep learning has achieved great success in computer vision and image processing applications, and several pioneering works have also succeeded in image fusion. However, due to the lack of ground truth, deep networks for image fusion are difficult to train well, which restricts their fusion performance. In this paper, we design a knowledge-guided deep network, the Generative-Fusion Network (GeFuNet), which takes natural images as both its training data and its ground truth. GeFuNet consists of two sub-networks: a data generation subnet and an image fusion subnet. During training, guided by the prior knowledge that real infrared and visible images mainly contain contour and texture information, respectively, the data generation subnet is trained to generate pseudo infrared and pseudo visible images. The image fusion subnet is then trained on these pseudo images and further supervised by the natural images. The image fusion subnet adopts a hierarchical block, learned through detail enhancement training, to effectively extract and fuse multi-level information from the source images. During inference, the trained image fusion subnet is used to fuse real infrared and visible images. Experimental results show that GeFuNet extracts a complete contour and more details from the infrared and visible images and fuses them into the final image. The results also demonstrate the applicability of GeFuNet to multi-scale image fusion, multi-focus image fusion, medical image fusion, and multi-exposure image fusion.
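
To make the two-subnet training scheme concrete, the following is a minimal PyTorch sketch of the idea described above: a generation subnet derives pseudo infrared and pseudo visible images from a natural image, a fusion subnet merges them, and the natural image itself serves as the supervision target. All class names, layer choices, and the loss are illustrative assumptions, not the authors' actual architecture or training objective.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-subnet layout; names and layers are assumptions.

class PseudoSourceGenerator(nn.Module):
    """Data generation subnet: maps a natural image to a pseudo infrared
    (contour-dominant) and a pseudo visible (texture-dominant) image."""
    def __init__(self, ch=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_ir = nn.Conv2d(ch, 1, 3, padding=1)   # contour branch
        self.to_vis = nn.Conv2d(ch, 1, 3, padding=1)  # texture branch

    def forward(self, natural):
        feat = self.shared(natural)
        return torch.sigmoid(self.to_ir(feat)), torch.sigmoid(self.to_vis(feat))


class HierarchicalFusionBlock(nn.Module):
    """Fusion subnet: extracts multi-level features from the two pseudo
    sources and merges all levels into a single fused image."""
    def __init__(self, ch=32, levels=3):
        super().__init__()
        self.levels = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2 if i == 0 else ch, ch, 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(levels)
        )
        self.out = nn.Conv2d(ch * levels, 1, 1)  # fuse features from all levels

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)
        feats = []
        for level in self.levels:
            x = level(x)
            feats.append(x)
        return torch.sigmoid(self.out(torch.cat(feats, dim=1)))


# One training step: the natural image acts as ground truth for the fused
# output, so no hand-labeled fusion targets are required.
gen, fuse = PseudoSourceGenerator(), HierarchicalFusionBlock()
opt = torch.optim.Adam(list(gen.parameters()) + list(fuse.parameters()), lr=1e-4)
natural = torch.rand(4, 1, 128, 128)  # a batch of grayscale natural images

pseudo_ir, pseudo_vis = gen(natural)
fused = fuse(pseudo_ir, pseudo_vis)
loss = nn.functional.l1_loss(fused, natural)  # supervise with the natural image
opt.zero_grad(); loss.backward(); opt.step()
```

At inference time, the trained fusion subnet would be applied directly to real infrared and visible image pairs; the generation subnet is only needed during training.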
