Abstract

This paper proposes a novel generative adversarial network (GAN) architecture, named LBP-BEGAN, for fusing infrared (IR) and visible (VIS) images. A loss function based on local binary patterns (LBP) ensures that the fused images generated by this network retain rich boundary information. At the same time, a distribution-based discriminator is used to distinguish the fused images from the original IR and VIS images, guaranteeing the quality of the fusion results. This structure establishes an adversarial loss without requiring an ideal fused image as the label. Qualitative and quantitative comparisons against eight classical and state-of-the-art fusion methods demonstrate the effectiveness of our strategy. Our approach generates fused images with clear edges and textures and preserves a large amount of information from the original images.
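The abstract does not spell out how the LBP-based boundary loss is formed, so the following is only a minimal sketch of one plausible realisation, assuming a PyTorch implementation, grayscale inputs in [0, 1], and a sigmoid-relaxed neighbour comparison so the term stays differentiable; the function names, the 8-neighbour encoding, and the L1 comparison against both source images are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_lbp_map(img, k=50.0):
    """Differentiable approximation of an 8-neighbour local binary pattern map.

    img: (B, 1, H, W) grayscale tensor in [0, 1].
    Each neighbour-vs-centre comparison is relaxed with a steep sigmoid so
    gradients can flow back to the generator (illustrative choice, not from the paper).
    """
    # Offsets of the 8 neighbours (dy, dx); each contributes one LBP bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    padded = F.pad(img, (1, 1, 1, 1), mode="replicate")
    h, w = img.shape[-2:]
    lbp = torch.zeros_like(img)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[..., 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # sigmoid(k * (neighbour - centre)) ≈ 1 if neighbour >= centre, else 0
        lbp = lbp + (2 ** bit) * torch.sigmoid(k * (neighbour - img))
    return lbp / 255.0  # normalise the 8-bit code to [0, 1]

def lbp_loss(fused, ir, vis):
    """L1 distance between the LBP map of the fused image and those of the IR/VIS sources."""
    lbp_fused = soft_lbp_map(fused)
    return F.l1_loss(lbp_fused, soft_lbp_map(ir)) + F.l1_loss(lbp_fused, soft_lbp_map(vis))
```

In a BEGAN-style setup such as the one described, a term like this would typically be added to the generator objective alongside the adversarial loss, encouraging the fused output to reproduce the local texture and edge patterns of both source images.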
