Abstract

Existing methods can generate a high dynamic range (HDR) image from a single low dynamic range (LDR) image using convolutional neural networks (CNNs). However, they are too cumbersome to run on mobile devices with limited computational resources. In this work, we design a lightweight CNN, namely LiTMNet, which takes a single LDR image as input and recovers the lost information in its saturated regions to reconstruct an HDR image. To avoid trading off reconstruction quality for efficiency, LiTMNet not only adopts a lightweight encoder for efficient feature extraction but also contains newly designed upsampling blocks in the decoder to alleviate artifacts and further accelerate the reconstruction. The final HDR image is produced by nonlinearly blending the network prediction and the original LDR image. Qualitative and quantitative comparisons demonstrate that LiTMNet produces HDR images of quality comparable with the current state of the art while running 38× faster as tested on a mobile device. Please refer to the supplementary video for additional visual results.
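The abstract does not specify LiTMNet's blending function, but single-image HDR reconstruction methods commonly combine the network output with the input using a soft saturation mask: well-exposed pixels keep their LDR values, and only saturated pixels are replaced by the prediction. The sketch below illustrates this generic scheme; the function name `blend_hdr`, the threshold `tau`, and the linear ramp are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def blend_hdr(ldr, hdr_pred, tau=0.95):
    """Blend a linear LDR image with a network's HDR prediction.

    A soft mask alpha is 0 in well-exposed regions (LDR values are kept)
    and rises to 1 in saturated regions (prediction takes over).
    ldr: float array in [0, 1], shape (H, W, 3); hdr_pred: same shape.
    NOTE: tau and the linear ramp are placeholder choices, not from the paper.
    """
    # Per-pixel exposure measure: the maximum over color channels.
    lum = ldr.max(axis=-1, keepdims=True)
    # Soft mask: 0 below the threshold tau, ramping linearly to 1 at full saturation.
    alpha = np.clip((lum - tau) / (1.0 - tau), 0.0, 1.0)
    # Convex combination: LDR where alpha=0, network prediction where alpha=1.
    return (1.0 - alpha) * ldr + alpha * hdr_pred
```

With this mask, gradients of the blend transition smoothly at the saturation boundary, which is what suppresses visible seams between the original and reconstructed regions.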
