Abstract

Current infrared and visible image fusion (IVIF) methods lack ground truth and rely on prior knowledge to guide the feature fusion process. However, during fusion these features are not treated in an equal and well-defined manner, which degrades image quality. To address this challenge, this study develops a new end-to-end model, termed the unpaired high-quality image-guided generative adversarial network (UHG-GAN). Specifically, we introduce high-quality images as the reference standard for the fused image and employ a global discriminator and a local discriminator to identify the distribution difference between high-quality images and fused images. Through adversarial learning, the generator produces images that better match the distribution of high-quality images. In addition, we design a Laplacian pyramid augmentation (LPA) module in the generator, which integrates multi-scale features of the source images across domains so that the generator can more fully extract structure and texture information. Extensive experiments demonstrate that our method effectively preserves the target information in the infrared image and the scene information in the visible image while significantly improving image quality.
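The abstract does not spell out how the LPA module builds its multi-scale representation. As a rough illustration only, the sketch below shows a standard Laplacian pyramid decomposition of the kind such a module would operate on; the function names, level count, and OpenCV-based implementation are assumptions for illustration, not the paper's code.

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=3):
        """Decompose an image into band-pass detail layers plus a coarse residual.

        Illustrative helper (not from the paper): an LPA-style module could fuse
        the per-level detail layers of the infrared and visible inputs.
        """
        pyramid = []
        current = img.astype(np.float32)
        for _ in range(levels):
            down = cv2.pyrDown(current)                 # halve resolution
            up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
            pyramid.append(current - up)                # high-frequency detail at this scale
            current = down
        pyramid.append(current)                         # low-frequency residual
        return pyramid

    def reconstruct(pyramid):
        """Invert the decomposition: upsample the residual and add detail layers back."""
        current = pyramid[-1]
        for detail in reversed(pyramid[:-1]):
            current = cv2.pyrUp(current, dstsize=(detail.shape[1], detail.shape[0]))
            current = current + detail
        return current

Because the decomposition is exactly invertible, structure and texture captured in the detail layers of either source image can be carried into the fused output without loss, which is the property a multi-scale fusion module of this kind relies on.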
