Abstract

Multi-exposure image fusion (MEF) is an important technique for generating high dynamic range images. However, most existing MEF studies focus on fusing a moderately over-exposed image with a moderately under-exposed image, and they are not robust when fusing images with extreme and diverse exposure levels. In this paper, we propose a robust MEF framework based on the Fourier transform and contrastive learning. Specifically, we develop a Fourier transform-based pixel intensity transfer strategy to synthesize images with diverse exposure levels from normally exposed natural images, and we train an encoder–decoder network to reconstruct the original natural image. In this way, the encoder and decoder learn to extract features from images with diverse exposure levels and to generate fused images with normal exposure. We further propose a contrastive regularization loss to strengthen the network's ability to recover normal exposure levels. In addition, we construct an extreme MEF benchmark dataset and a random MEF benchmark dataset for a more comprehensive evaluation of MEF algorithms. We extensively compare our method with fifteen competitive traditional and deep learning-based MEF algorithms on three benchmark datasets, and our method outperforms the others in both subjective visual quality and objective evaluation metrics. Our code, datasets, and all fused images will be released.
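
The abstract describes the Fourier transform-based pixel intensity transfer only at a high level. As a rough illustration of the general idea, the minimal sketch below (Python/NumPy) alters the exposure of a normally exposed image by scaling the low-frequency part of its Fourier amplitude spectrum while leaving the phase, which carries image structure, untouched. The function name `synthesize_exposure`, the mask radius, and the scaling factors are hypothetical and are not taken from the paper.

```python
import numpy as np

def synthesize_exposure(img, gamma, radius=8):
    """Hypothetical sketch: change global exposure of an HxWx3 float image in
    [0, 1] by scaling only the low-frequency amplitude spectrum (which mostly
    carries global intensity) while keeping the phase (structure) unchanged.
    gamma > 1 brightens, gamma < 1 darkens. The actual transfer strategy used
    in the paper may differ."""
    spectrum = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Boolean mask selecting a small low-frequency square around the centre.
    h, w = img.shape[:2]
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = True

    # Scale the low-frequency amplitude only; broadcast the mask over channels.
    amplitude = np.where(mask[..., None], amplitude * gamma, amplitude)

    recombined = np.fft.ifftshift(amplitude * np.exp(1j * phase), axes=(0, 1))
    out = np.real(np.fft.ifft2(recombined, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

# Example: derive under- and over-exposed copies of one normally exposed image
# (random data stands in for a real photo here).
img = np.random.rand(128, 128, 3)
under = synthesize_exposure(img, 0.4)
over = synthesize_exposure(img, 1.8)
```

In the training scheme summarized above, such synthesized images with diverse exposure levels would serve as inputs to the encoder–decoder network, with the original normally exposed image as the reconstruction target.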
