Abstract

High dynamic range (HDR) images capture the luminance of the real world and contain more detail than low dynamic range (LDR) images. In this paper, we propose a dual-stream, globally guided, end-to-end learning method that reconstructs an HDR image from a single LDR input by combining global information with local image features. In our framework, global and local features are learned separately in dual-stream branches. In the reconstruction phase, a fusion layer merges them so that the global features guide the local features toward a better HDR reconstruction. Furthermore, we design a mixed loss function, comprising a multi-scale pixel-wise loss, a color similarity loss, and a gradient loss, to jointly train our network. Comparative experiments against other state-of-the-art methods show that our approach achieves superior performance.
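As a rough illustration of the mixed loss described above, the sketch below shows one way such a loss could be assembled. It assumes a PyTorch-style implementation; the loss weights, the number of pyramid scales, and the exact forms of the color similarity and gradient terms are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def mixed_loss(pred, target, w_pixel=1.0, w_color=1.0, w_grad=1.0, scales=3):
    """Hypothetical mixed loss: multi-scale pixel-wise + color similarity + gradient terms.

    pred, target: (N, 3, H, W) tensors holding the reconstructed and reference HDR images.
    All weights and the scale count are placeholders for illustration.
    """
    # Multi-scale pixel-wise loss: L1 distance accumulated over an average-pooled pyramid.
    pixel = 0.0
    p, t = pred, target
    for _ in range(scales):
        pixel = pixel + F.l1_loss(p, t)
        p = F.avg_pool2d(p, 2)
        t = F.avg_pool2d(t, 2)

    # Color similarity loss: penalize the cosine distance between per-pixel RGB vectors.
    cos = F.cosine_similarity(pred, target, dim=1)  # (N, H, W)
    color = (1.0 - cos).mean()

    # Gradient loss: L1 distance between horizontal and vertical finite differences.
    def grads(x):
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx, dy

    pdx, pdy = grads(pred)
    tdx, tdy = grads(target)
    grad = F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)

    return w_pixel * pixel + w_color * color + w_grad * grad
```

In practice, the weighting between the three terms would be tuned on a validation set; the structure above only conveys how the pixel, color, and gradient components can be combined into a single training objective.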
