Abstract

In this paper, we propose DMEF, a novel deep decomposition approach for multi-exposure image fusion based on Retinex theory. Following the Retinex assumption, we first decompose the source images into illumination and reflection maps with a data-driven decomposition network, into which we introduce a pathwise interaction block that reactivates deep features lost in one path and embeds them into the other. Loss of illumination and reflection features during decomposition is thereby effectively suppressed. A high-dynamic-range illumination map is then obtained by fusing the separated illumination maps in the fusion network, and the reconstructed details in under-exposed and over-exposed regions become clearer with the help of the fused reflection map, which contains complete high-frequency scene information. Finally, the fused illumination and reflection maps are multiplied pixel by pixel to obtain the final fused image. Moreover, to preserve discontinuities in the illumination map where the gradient of the reflection map changes steeply, we introduce a structure-preserving smoothness loss that retains structural information and eliminates visual artifacts in these regions. Extensive subjective and objective experiments against state-of-the-art fusion methods demonstrate the superiority of the proposed network.
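To make the Retinex formulation concrete, below is a minimal sketch, not the authors' implementation, of the pixel-wise reconstruction and of one common form of structure-aware smoothness penalty from related Retinex-decomposition work. The exponential edge weighting, the forward-difference gradients, and the factor `lam` are assumptions, since the abstract does not specify the exact form of the loss.

```python
import numpy as np

def reconstruct(illumination, reflection):
    """Retinex reconstruction: the observed image is modeled as the
    pixel-wise product of an illumination map and a reflection map."""
    return illumination * reflection

def structure_aware_smoothness(illumination, reflection, lam=10.0):
    """Hypothetical structure-preserving smoothness penalty:
    illumination gradients are penalized, but the penalty is relaxed
    where the reflection map has steep gradients, so illumination
    discontinuities at strong scene edges are preserved."""
    # Forward differences as simple gradient estimates.
    gi_x = np.abs(np.diff(illumination, axis=1))
    gi_y = np.abs(np.diff(illumination, axis=0))
    gr_x = np.abs(np.diff(reflection, axis=1))
    gr_y = np.abs(np.diff(reflection, axis=0))
    # Down-weight the smoothness penalty at reflection edges so the
    # illumination map is allowed to stay discontinuous there.
    loss_x = gi_x * np.exp(-lam * gr_x)
    loss_y = gi_y * np.exp(-lam * gr_y)
    return loss_x.mean() + loss_y.mean()

# Toy usage with random single-channel maps in [0, 1].
rng = np.random.default_rng(0)
I_f = rng.random((64, 64))   # stand-in for the fused illumination map
R_f = rng.random((64, 64))   # stand-in for the fused reflection map
fused = reconstruct(I_f, R_f)
print(fused.shape, structure_aware_smoothness(I_f, R_f))
```

In this formulation the smoothness term vanishes exactly where reflection gradients are large, which matches the abstract's goal of retaining illumination discontinuities at steep reflection edges rather than smoothing them away.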
