Abstract
We propose a single-shot high dynamic range (HDR) imaging algorithm, based on a deep convolutional neural network (CNN), that uses row-wise varying exposures in a single raw image. We first convert a raw Bayer input image into a radiance map by calibrating the rows with different exposures, and then we design a new CNN model to restore the missing information at under- and over-exposed pixels and to reconstruct color information from the raw radiance map. The proposed CNN model consists of three branch networks that extract multiscale feature maps from an image. To estimate high-quality HDR images effectively, we develop a robust loss function that considers the human visual system (HVS) model, a color perception model, and multiscale contrast. Experimental results on both synthetic and captured real images demonstrate that the proposed algorithm achieves synthesis results of significantly higher quality than those of conventional algorithms in terms of structure, color, and visual artifacts.
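To make the calibration step concrete, the sketch below converts a Bayer raw frame with alternating row-wise exposures into a linear radiance map by normalizing each row with its exposure time. The two-row exposure period, the black/white levels, and the function name are illustrative assumptions, not the paper's exact implementation; under- and over-exposed pixels remain unreliable after this step and are restored by the CNN.

```python
import numpy as np

def bayer_to_radiance(raw, t_long, t_short, black_level=64.0, white_level=1023.0):
    """Convert a row-wise dual-exposure Bayer raw frame to a linear radiance map.

    Assumes (for illustration) that the exposure alternates every two rows,
    i.e. one full Bayer period: rows 0-1 are long-exposed, rows 2-3 are
    short-exposed, and so on. Black/white levels are hypothetical 10-bit values.
    """
    # Subtract the black level and normalize pixel values to [0, 1].
    img = (raw.astype(np.float64) - black_level) / (white_level - black_level)
    img = np.clip(img, 0.0, 1.0)

    # Per-row exposure times: long/short alternating every Bayer period (2 rows).
    rows = np.arange(img.shape[0])
    exposures = np.where((rows // 2) % 2 == 0, t_long, t_short)

    # Dividing each row by its exposure time brings all rows to a common
    # radiance scale, yielding the raw radiance map used as the CNN input.
    return img / exposures[:, None]
```

For example, `bayer_to_radiance(raw, t_long=1/30, t_short=1/240)` would align an 8:1 exposure ratio onto one radiance scale before restoration.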
Highlights
Despite significant recent advances in digital imaging technology, conventional cameras can capture only a limited range of the intensity levels perceptible by the human eye.
We experimentally show, on both synthetic and real image datasets, that the proposed convolutional neural network (CNN) model trained with the robust loss function produces clear and natural-looking high dynamic range (HDR) images of significantly higher quality than those produced by conventional algorithms [33], [37], [39]–[41].
Experimental Results
We evaluate the performance of the proposed algorithm against five conventional single-shot HDR imaging algorithms: Gu et al.'s algorithm [33] and Cho et al.'s algorithm [37] are interpolation-based, Choi et al.'s algorithm [39] is sparse representation model-based, and An and Lee's algorithm [40] and Çoğalan and Akyüz's algorithm [41] are learning-based.
Summary
Despite significant recent advances in digital imaging technology, conventional cameras can capture only a limited range of the intensity levels perceptible by the human eye. Existing single-shot methods also fall short: An and Lee's algorithm [40], for example, produces visible artifacts in the synthesized results, especially in highly textured regions. To overcome these limitations, we develop a new and robust loss function for HDR imaging that considers the HVS model, color perception, and multiscale contrast.
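As a minimal sketch of the multiscale contrast idea, the following loss term compares the predicted and reference HDR images at several resolutions so that both fine texture and coarse contrast are penalized. The scale count, the plain L1 distance, and the function name are assumptions for illustration; the paper's full loss additionally includes the HVS and color perception terms, which are not reproduced here.

```python
import torch
import torch.nn.functional as F

def multiscale_l1_loss(pred, target, num_scales=3):
    """Illustrative multiscale term: average L1 distance between the
    prediction and the ground-truth HDR image over a resolution pyramid.

    pred, target: float tensors of shape (N, C, H, W).
    """
    loss = 0.0
    for s in range(num_scales):
        loss = loss + F.l1_loss(pred, target)
        if s < num_scales - 1:
            # Halve the resolution for the next, coarser scale so that
            # large-scale contrast errors are also penalized.
            pred = F.avg_pool2d(pred, kernel_size=2)
            target = F.avg_pool2d(target, kernel_size=2)
    return loss / num_scales
```

Averaging over scales keeps the term's magnitude comparable to a single-scale L1 loss, which simplifies weighting it against the other loss components.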