Abstract

Multi-exposure image fusion is a fast and efficient way to obtain high dynamic range (HDR) images, since current commercial imaging devices cannot capture the full dynamic range of a scene. However, existing multi-exposure image fusion algorithms suffer from long fusion times and large storage requirements. In this paper, a deep learning based multi-exposure image fusion algorithm is proposed. Two sequences of extreme-exposure images are fed into the network, and a two-channel and spatial attention mechanism is introduced to automatically learn and optimise the weights and output the best fusion weights. In addition, the model is trained against ground-truth images, and a custom loss function makes the output more closely resemble the real image. Experimental results show that the multi-exposure image fusion network designed in this paper outperforms existing networks in both objective and subjective evaluations.
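
To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of the kind of network the abstract outlines: an under-exposed and an over-exposed image are stacked as a two-channel input, a small CNN with a spatial attention block predicts a per-pixel fusion weight map, and the fused image is a weighted blend of the two inputs. The layer sizes, the attention design, and all class names here are illustrative assumptions.

```python
# Hedged sketch of a two-channel, spatial-attention fusion-weight network.
# All architectural details are assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Spatial attention: re-weights features using channel-pooled statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)           # (B, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values     # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                   # spatially re-weighted features


class FusionWeightNet(nn.Module):
    """Predicts a per-pixel fusion weight map from an under/over-exposed pair."""

    def __init__(self, features: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = SpatialAttention()
        self.head = nn.Conv2d(features, 1, 3, padding=1)

    def forward(self, under: torch.Tensor, over: torch.Tensor) -> torch.Tensor:
        x = torch.cat([under, over], dim=1)               # two-channel input
        w = torch.sigmoid(self.head(self.attention(self.encoder(x))))
        return w * under + (1.0 - w) * over               # weighted blend of the pair


if __name__ == "__main__":
    under = torch.rand(1, 1, 128, 128)   # synthetic under-exposed frame
    over = torch.rand(1, 1, 128, 128)    # synthetic over-exposed frame
    fused = FusionWeightNet()(under, over)
    print(fused.shape)                   # torch.Size([1, 1, 128, 128])
```

In this sketch the weight map is produced per pixel, so training against ground-truth references with a reconstruction-style loss (as the abstract describes) would push the blend toward the real well-exposed image; the specific custom loss used in the paper is not reproduced here.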
