Abstract

In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their performance by designing sophisticated activity-level measurements and fusion rules, yet they have had limited success in complex fusion tasks. Inspired by the recent FusionGAN, which was the first to utilize generative adversarial networks (GAN) to fuse infrared and visible images and achieved promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, to preserve the content of extreme-exposure image pairs in the fused image, we increase the number of discriminators, each differentiating between the fused image and one of the extreme-exposure source images, while a generator network is trained to produce the fused image. Through the adversarial relationship between the generator and the discriminators, the fused image comes to contain more information from the extreme-exposure image pairs, yielding better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids hand-crafted feature design and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset; the results show that the proposed model achieves better fusion performance than existing multi-exposure image fusion methods in both visual effect and evaluation metrics.

Highlights

  • Driven by advances in digital imaging technology, expectations for image visual quality are higher than ever before

  • In this paper, a novel multi-exposure image fusion method based on generative adversarial networks is presented

  • To preserve the content of extreme-exposure image pairs in the fused image, we increase the number of discriminators, each differentiating between the fused image and one of the extreme-exposure source images
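The multi-discriminator design above can be illustrated with a minimal numpy sketch of the adversarial objectives, assuming a least-squares GAN formulation (an assumption following FusionGAN; the paper's exact loss form and labels may differ). One discriminator scores the fused image against the over-exposed source, the other against the under-exposed source.

```python
import numpy as np

def generator_adv_loss(d_over, d_under, real_label=1.0):
    """Adversarial part of the generator loss with two discriminators.

    d_over / d_under: discriminator scores assigned to the fused image,
    one discriminator per source exposure (over- and under-exposed).
    The generator is rewarded when both discriminators score the fused
    image as "real".
    """
    return np.mean((d_over - real_label) ** 2) + np.mean((d_under - real_label) ** 2)

def discriminator_loss(d_real, d_fake, real_label=1.0, fake_label=0.0):
    """Each discriminator tries to score its own source image as real
    and the fused image as fake."""
    return np.mean((d_real - real_label) ** 2) + np.mean((d_fake - fake_label) ** 2)
```

At the generator's optimum both discriminators are fully fooled and the adversarial loss vanishes; this is the pressure that pushes content from both exposures into the fused image.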

Introduction

Driven by advances in digital imaging technology, expectations for image visual quality are higher than ever before. Traditional fusion methods involve three major steps: image transformation, activity-level measurement, and fusion-rule design [19]. These steps are limited by implementation difficulty and high computational cost. In our method, the discriminators are trained to distinguish the fused image from the source images; this adversarial process forces the generator to perform better. As for loss functions, a pixel-intensity loss and a gradient loss are applied in our network, helping the fused image preserve luminance information and texture information from the source images. We design a new loss function for multi-exposure fusion (MEF) that helps the fused image preserve more information from the source images.
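The pixel-intensity and gradient terms described above can be sketched as follows. This is a minimal illustrative version, assuming mean-squared penalties against both source exposures with equal weights; the weighting and the exact gradient operator are assumptions, not the paper's published hyper-parameters.

```python
import numpy as np

def gradient(img):
    """Forward-difference gradients in both spatial directions,
    zero-padded at the border."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def content_loss(fused, over, under, w_grad=1.0):
    """Pixel-intensity loss plus gradient loss against both sources.

    The intensity term pulls the fused luminance toward both exposures;
    the gradient term preserves texture (edge) information.
    """
    intensity = np.mean((fused - over) ** 2) + np.mean((fused - under) ** 2)
    fgx, fgy = gradient(fused)
    grad = 0.0
    for src in (over, under):
        sgx, sgy = gradient(src)
        grad += np.mean((fgx - sgx) ** 2) + np.mean((fgy - sgy) ** 2)
    return intensity + w_grad * grad
```

In training, a term of this form would be combined with the adversarial loss, so the generator balances fidelity to both exposures against fooling the discriminators.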

Related works
Fusion methods based on deep learning
The basic theory of GAN
Variants of GAN and their applications
Proposed method
GANFuse
Loss function
Generator
Discriminator
Training
Testing
Experiments
Qualitative comparisons
Quantitative comparisons
Comparative experiment
Conclusion and future work
Compliance with ethical standards