Abstract

In this paper, we present an end-to-end architecture for multi-exposure image fusion based on generative adversarial networks, termed MEF-GAN. In our architecture, a generator network and a discriminator network are trained simultaneously, forming an adversarial relationship. The generator is trained to produce a realistic fused image from the given source images, one that can fool the discriminator; correspondingly, the discriminator is trained to distinguish the generated fused images from the ground truth. This adversarial relationship frees the fused image from being constrained by the content loss alone, so the fused images are closer to the ground truth in terms of probability distribution, compensating for the insufficiency of a single content loss. Moreover, to address the problem that the luminance of multi-exposure images varies greatly with spatial location, a self-attention mechanism is employed in our architecture to allow for attention-driven, long-range dependency modeling. Thus, local distortions, confusing results, and inappropriate representations can be corrected in the fused image. Qualitative and quantitative experiments are performed on publicly available datasets, and the results demonstrate that MEF-GAN outperforms the state of the art in terms of both visual quality and objective evaluation metrics. Our code is publicly available at https://github.com/jiayi-ma/MEF-GAN.
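To make the two mechanisms named above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a SAGAN-style self-attention block that lets convolutional features attend across all spatial positions, and the adversarial objective in which the generator is rewarded for fooling the discriminator while a content term keeps the fused image close to the ground truth. The module structure, the `lam` weight, and the L1 content term are illustrative assumptions rather than details taken from the MEF-GAN paper or repository.

```python
# Illustrative sketch only; names and loss weights are assumptions,
# not the MEF-GAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)               # (b, hw, hw): each position
                                                      # attends to every other one
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # attention-refined features

def d_loss(d, fused, gt):
    """Discriminator: score ground truth as real, generated fusion as fake."""
    real = d(gt)
    fake = d(fused.detach())  # detach so D's update does not reach the generator
    return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
            F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

def g_loss(d, fused, gt, lam=100.0):
    """Generator: adversarial term (fool D) plus an L1 content term."""
    fake = d(fused)
    adv = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    return adv + lam * F.l1_loss(fused, gt)
```

A training step would alternate updates of `d_loss` and `g_loss`; the adversarial term is what pushes the fused image toward the ground-truth distribution beyond what the content term alone enforces.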
