Abstract

In recent years, deep learning has been widely used in image fusion. Since the task has no ground truth, Generative Adversarial Networks (GANs) are well suited to it. Building on the GAN framework, the Boundary Equilibrium Generative Adversarial Network (BEGAN) introduces an equilibrium-based loss that, paired with a simple model design, keeps the discriminator and generator in balance and ultimately produces high-quality images. For image fusion, it is natural to employ dual discriminators to guide a generator so that the fused result retains both the thermal radiation information of the infrared image and the texture information of the visible image. Based on these considerations, we propose a BEGAN-based dual-discriminator network, which we call D2BEGAN. In this network, the generator uses a dense block to enhance the extraction of information from the source images; after training, the generator performs end-to-end image fusion. To verify the model's effectiveness, we ran tests on publicly available datasets, showing that our fusion method produces relatively natural fused images while achieving the best metrics compared to many state-of-the-art models.
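The equilibrium mechanism mentioned above can be illustrated with a minimal sketch. The snippet below shows the standard BEGAN update of the balance variable k and how its losses might extend to two discriminators (one per source modality), using scalar stand-ins for the autoencoder reconstruction losses. The hyperparameter values and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hedged sketch of BEGAN's equilibrium update, extended to a
# dual-discriminator setting as the abstract suggests. The reconstruction
# losses here are scalar stand-ins, not outputs of a real network.

GAMMA = 0.75      # diversity ratio gamma (value assumed for illustration)
LAMBDA_K = 0.001  # learning rate for the equilibrium variable k

def began_k_update(k, loss_real, loss_fake):
    """One BEGAN step: k_{t+1} = k_t + lambda_k * (gamma * L(x) - L(G(z)))."""
    k = k + LAMBDA_K * (GAMMA * loss_real - loss_fake)
    return float(np.clip(k, 0.0, 1.0))  # k is kept in [0, 1]

def dual_discriminator_losses(l_ir_real, l_ir_fake,
                              l_vis_real, l_vis_fake, k_ir, k_vis):
    """BEGAN-style losses for two discriminators and one shared generator."""
    loss_d_ir  = l_ir_real  - k_ir  * l_ir_fake   # infrared discriminator
    loss_d_vis = l_vis_real - k_vis * l_vis_fake  # visible discriminator
    loss_g     = l_ir_fake + l_vis_fake           # generator must fool both
    return loss_d_ir, loss_d_vis, loss_g

# Toy step: the fake reconstruction error exceeds gamma * real error,
# so k decreases slightly, slowing the discriminator's focus on fakes.
k_next = began_k_update(0.5, loss_real=1.0, loss_fake=1.0)
```

Here each discriminator maintains its own balance variable, so the generator is pushed toward an equilibrium with both the infrared and the visible branch simultaneously.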
