Abstract

Recent works on single-image high dynamic range (HDR) reconstruction fail to hallucinate plausible textures, resulting in missing information and artifacts in large under/over-exposed regions. In this article, a decoupled kernel prediction network is proposed to infer an HDR image from a low dynamic range (LDR) image. Specifically, we first adopt a simple module to generate a preliminary result, which precisely estimates well-exposed HDR regions. Meanwhile, an encoder-decoder backbone network with a soft mask guidance module is presented to predict pixel-wise kernels, which are then convolved with the preliminary result to obtain the final HDR output. Unlike traditional kernels, our predicted kernels are decoupled along the spatial and channel dimensions. The advantages of our method are at least threefold. First, our model is guided by the soft mask so that it can focus on the information most relevant to under/over-exposed regions. Second, pixel-wise kernels can adaptively address the different degradations in differently exposed regions. Third, decoupled kernels avoid information redundancy across channels and reduce the solution space of our model. As a result, our method is able to hallucinate fine details in under/over-exposed regions and render visually pleasing results. Extensive experiments demonstrate that our model outperforms state-of-the-art methods.
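The abstract does not give implementation details, so the following is only a minimal sketch of one plausible reading of the decoupled, pixel-wise kernel application step: a per-pixel spatial kernel shared across channels, followed by a per-pixel channel-mixing kernel, applied to the preliminary HDR estimate. The function name, tensor shapes, and the softmax normalization of the spatial weights are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def apply_decoupled_kernels(x, spatial_k, channel_k, k=3):
    """Apply per-pixel kernels decoupled into a spatial part and a channel part.

    x:          (B, C, H, W)    preliminary HDR estimate
    spatial_k:  (B, k*k, H, W)  per-pixel spatial kernel, shared across channels
    channel_k:  (B, C*C, H, W)  per-pixel 1x1 channel-mixing kernel

    Shapes and normalization are illustrative assumptions, not the paper's spec.
    """
    B, C, H, W = x.shape

    # Spatial step: gather each pixel's k*k neighbourhood and take a weighted
    # sum with its predicted spatial kernel (softmax-normalized, a common
    # choice in kernel prediction networks and an assumption here).
    patches = F.unfold(x, kernel_size=k, padding=k // 2)       # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)
    w_sp = spatial_k.softmax(dim=1).view(B, 1, k * k, H, W)
    y = (patches * w_sp).sum(dim=2)                             # (B, C, H, W)

    # Channel step: mix channels per pixel with a predicted C x C matrix,
    # so the dense per-pixel kernel is never formed explicitly.
    w_ch = channel_k.view(B, C, C, H, W)
    y = torch.einsum('bcdhw,bdhw->bchw', w_ch, y)               # (B, C, H, W)
    return y
```

Under this reading, the decoupling keeps the per-pixel parameter count at k*k + C*C rather than k*k*C*C, which is consistent with the abstract's claim of a reduced solution space.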
