Abstract

Generative adversarial networks (GANs) have been widely applied to infrared and visible image fusion. However, existing GAN-based fusion methods establish only one discriminator in the network, which drives the fused image to capture gradient information from the visible image alone and may therefore lose infrared intensity information and texture information in the fused result. To solve this problem and improve the performance of the GAN, we extend the GAN to multiple discriminators and propose an end-to-end multi-discriminator Wasserstein generative adversarial network (MD-WGAN). In this framework, the fused image preserves the major infrared intensity and detail information through the first discriminator, and retains more of the texture information present in the visible image through the second discriminator. We also design a texture loss function based on local binary patterns to preserve more texture from the visible image. Extensive qualitative and quantitative experiments demonstrate the advantages of our method over other state-of-the-art fusion methods.
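
As a rough illustration of the texture loss described above, the sketch below compares local-binary-pattern (LBP) maps of the fused and visible images with an L1 penalty. This is not the authors' implementation: the soft sigmoid thresholding (controlled by the assumed parameter `beta`) is introduced here only to keep the comparison differentiable, and the exact formulation in the paper may differ.

```python
# Minimal sketch (assumption, not the paper's code) of an LBP-based texture loss.
import torch
import torch.nn.functional as F

def soft_lbp(img: torch.Tensor, beta: float = 10.0) -> torch.Tensor:
    """Compute a soft 8-neighbour LBP map for a (N, 1, H, W) grayscale batch."""
    # Extract 3x3 neighbourhoods around every pixel.
    patches = F.unfold(img, kernel_size=3, padding=1)            # (N, 9, H*W)
    n = patches.shape[0]
    center = patches[:, 4:5, :]                                  # centre pixel of each patch
    neighbours = torch.cat([patches[:, :4, :], patches[:, 5:, :]], dim=1)  # 8 neighbours
    # Soft comparison: close to 1 where neighbour >= centre, close to 0 otherwise.
    bits = torch.sigmoid(beta * (neighbours - center))           # (N, 8, H*W)
    weights = (2 ** torch.arange(8, device=img.device, dtype=img.dtype)).view(1, 8, 1)
    lbp = (bits * weights).sum(dim=1, keepdim=True) / 255.0      # normalise codes to [0, 1]
    return lbp.view(n, 1, *img.shape[-2:])

def texture_loss(fused: torch.Tensor, visible: torch.Tensor) -> torch.Tensor:
    """L1 distance between the LBP maps of the fused and visible images."""
    return F.l1_loss(soft_lbp(fused), soft_lbp(visible))
```

In a multi-discriminator setup of the kind the abstract describes, a term like this would typically be added to the generator objective alongside the adversarial losses from the two discriminators, with a weighting coefficient chosen by validation.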
