Abstract
A novel inverse tone mapping network, called "iTM-Net", is proposed in this paper. For training iTM-Net, we also propose a novel loss function that considers the pixel distribution of HDR images. We first point out that, in inverse tone mapping with CNNs, training with a standard loss function causes a problem due to the pixel distribution of HDR images. To overcome the problem, the novel loss function non-linearly tone-maps target HDR images into LDR ones on the basis of a tone mapping operator, and the distance between the tone-mapped image and a predicted one is then calculated. The proposed loss function enables us not only to normalize HDR images but also to distribute their pixel values like those of LDR images. Experimental results show that HDR images predicted by the proposed iTM-Net have higher quality than those predicted by conventional inverse tone mapping methods, including state-of-the-art ones, in terms of both HDR-VDP-2.2 and PU encoding + MS-SSIM. In addition, compared with loss functions that do not consider the HDR pixel distribution, the proposed loss function is shown to improve the performance of CNNs.
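The core idea of the loss can be sketched as follows: the HDR target is compressed by a tone mapping operator before the distance to the prediction is computed. This is a minimal NumPy sketch under stated assumptions; the μ-law operator, the μ value, the L1 distance, and the `[0, 1]` normalization are illustrative choices, not details taken from the paper.

```python
import numpy as np

def mu_law_tonemap(hdr, mu=5000.0):
    """Compress an HDR image into an LDR-like range with mu-law encoding.

    The mu-law operator and mu=5000.0 are assumptions for illustration;
    the paper only states that a tone mapping operator is applied.
    """
    hdr = np.clip(hdr, 0.0, 1.0)  # assumes HDR values normalized to [0, 1]
    return np.log1p(mu * hdr) / np.log1p(mu)

def tonemapped_l1_loss(pred, target_hdr):
    """Distance between a prediction and the tone-mapped HDR target.

    L1 is one possible choice of distance; the mean is taken over all pixels.
    """
    return float(np.mean(np.abs(pred - mu_law_tonemap(target_hdr))))
```

Because the target is tone-mapped before the comparison, large radiance values no longer dominate the loss, which is the intuition behind distributing HDR pixel values like those of LDR images:

```python
target = np.random.default_rng(0).uniform(0.0, 1.0, size=(8, 8))
perfect = mu_law_tonemap(target)      # a prediction matching the tone-mapped target
print(tonemapped_l1_loss(perfect, target))  # 0.0
```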