Abstract

High dynamic range (HDR) imaging provides the capability of handling real-world lighting, unlike traditional low dynamic range (LDR) imaging, which struggles to accurately represent scenes with a wide range of luminance. However, most imaging content is still available only in LDR. This paper presents a method for generating HDR content from LDR content based on deep Convolutional Neural Networks (CNNs), termed ExpandNet. ExpandNet accepts LDR images as input and generates images with an expanded range in an end-to-end fashion. The model attempts to reconstruct missing information that was lost from the original signal due to quantization, clipping, tone mapping or gamma correction. The added information is reconstructed from learned features, as the network is trained in a supervised fashion using a dataset of HDR images. The approach is fully automatic and data driven; it does not require any heuristics or human expertise. ExpandNet uses a multiscale architecture which avoids the use of upsampling layers in order to improve image quality. Quantitatively, the method compares favourably with existing expansion/inverse tone mapping operators on multiple metrics, even for badly exposed inputs.
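To make the degradations named above concrete, the short Python/NumPy sketch below shows one plausible way an LDR capture can be simulated from linear HDR radiance through exposure scaling, clipping, gamma encoding and quantization. The function name and the exposure, gamma and bit-depth parameters are illustrative assumptions, not values from the paper.

    import numpy as np

    def simulate_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
        """Simulate an LDR capture from a linear HDR image (float array).

        Illustrative only: models the clipping, gamma encoding and
        quantization mentioned in the abstract; parameter values are
        assumptions, not taken from the paper.
        """
        ldr = np.clip(hdr * exposure, 0.0, 1.0)    # clip out-of-range highlights
        ldr = ldr ** (1.0 / gamma)                 # gamma-encode for display
        levels = 2 ** bits - 1
        return np.round(ldr * levels) / levels     # quantize to limited bit depth

Supervised training pairs can then be formed by applying this kind of simulation to each HDR image in a dataset and asking the network to invert it.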

Highlights

  • High dynamic range (HDR) imaging provides the capability to capture, manipulate and display real-world lighting, unlike traditional low dynamic range (LDR) imaging

  • This paper introduces a method for expanding single-exposure LDR content to HDR using Convolutional Neural Networks (CNNs)

  • The novel three-branch architecture provides a dedicated solution to this type of problem, as each branch accounts for a different aspect of the expansion (see the sketch after this list)
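The PyTorch sketch below illustrates one way such a three-branch, fusion-based design could be organised: a local branch for fine detail, a dilated branch for medium-scale context, and a global branch that summarises the whole image, combined without upsampling layers. Layer counts, channel widths and the fixed 256x256 input size for the global branch are assumptions for illustration, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ThreeBranchSketch(nn.Module):
        """Minimal sketch of a three-branch LDR-to-HDR CNN.

        Layer counts and channel widths are illustrative, not the paper's
        exact configuration; only the overall structure follows the
        description: local detail, dilated context and a global summary,
        fused without learned upsampling layers.
        """
        def __init__(self):
            super().__init__()
            # Local branch: small receptive field, preserves fine detail.
            self.local = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.SELU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.SELU(),
            )
            # Dilation branch: larger receptive field via dilated convolutions.
            self.dilated = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=2, dilation=2), nn.SELU(),
                nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.SELU(),
            )
            # Global branch: works on a fixed-size downsampled copy and
            # reduces it to a single feature vector for the whole image.
            self.global_branch = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SELU(),
                nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.SELU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Fusion: concatenate branch outputs and predict the expanded image.
            self.fusion = nn.Sequential(
                nn.Conv2d(64 + 64 + 64, 64, 1), nn.SELU(),
                nn.Conv2d(64, 3, 1), nn.Sigmoid(),
            )

        def forward(self, x):
            h, w = x.shape[-2:]
            loc = self.local(x)
            dil = self.dilated(x)
            # Resize a copy for the global branch, then broadcast its feature
            # vector back to full resolution (no learned upsampling layers).
            small = F.interpolate(x, size=(256, 256), mode='bilinear',
                                  align_corners=False)
            glob = self.global_branch(small).expand(-1, -1, h, w)
            return self.fusion(torch.cat([loc, dil, glob], dim=1))

A forward pass on a batch of LDR tensors scaled to [0, 1] would then be, for example, hdr_pred = ThreeBranchSketch()(ldr_batch).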


Summary

Introduction

High dynamic range (HDR) imaging provides the capability to capture, manipulate and display real-world lighting, unlike traditional low dynamic range (LDR) imaging. A number of methods which can retarget LDR to HDR content have been presented [BADC17]. These methods make it possible to utilise and manipulate the vast amounts of LDR content within HDR pipelines and to visualise them on HDR displays. Such methods are primarily model-driven, use various parameters which make them difficult for non-experts to use, and are not suitable for all types of content. CNNs are the current de-facto approach for many imaging tasks, due to their high learning capacity as well as their architectural qualities which make them highly suitable for image processing [Sch14]. Results demonstrate an improvement in quality over all other previous approaches that were tested, including some other CNN architectures.

