Abstract

Backlit images are usually taken when the light source is opposite the camera. The resulting uneven exposure (e.g., underexposure in the foreground and overexposure in the background) makes backlit image enhancement more challenging than general enhancement tasks, which only need to increase or decrease the exposure of the whole image. Compared to traditional approaches, Convolutional Neural Networks perform well in enhancing images thanks to their ability to exploit contextual features. However, the lack of large benchmark datasets and specially designed models has impeded the development of backlit image enhancement. In this paper, we build the first large-scale BAcklit Image Dataset (BAID), which contains 3000 backlit images and the corresponding ground truth manually adjusted by trained photographers. It covers a broad range of categories under different backlit conditions in both indoor and outdoor scenes. Furthermore, we propose a saliency-guided backlit image enhancement network, namely BacklitNet, for robust and natural restoration of backlit images. In particular, our model innovatively combines a nested U-structure with bilateral grids, which enables fully extracting multi-scale saliency information and rapidly enhancing images of arbitrary resolution. Moreover, we propose a carefully designed loss function, based on prior knowledge of the brightness distribution of backlit images, that encourages the network to focus more on backlit regions during training. We evaluate the proposed method on the BAID dataset and on two public small-scale backlit image datasets. Experimental results demonstrate that our method performs favorably against state-of-the-art approaches.
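The abstract does not specify the exact form of the brightness-prior loss, but the idea of weighting the reconstruction error toward underexposed (likely backlit) regions can be sketched as follows. The function name, the `(1 - input)^gamma` weighting, and the choice of an L1 base loss are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def backlit_weighted_l1(pred, target, inp, gamma=2.0):
    """Illustrative sketch of a brightness-prior weighted L1 loss.

    Darker pixels in the input (likely the underexposed backlit
    foreground) receive larger weights, so the optimizer is pushed to
    restore those regions first. All arrays are H x W grayscale images
    with values in [0, 1]. This weighting scheme is an assumption for
    illustration only; BacklitNet's actual loss is defined in the paper.
    """
    # Weight is near 1 for dark input pixels, near 0 for bright ones.
    weight = (1.0 - inp) ** gamma
    return float(np.mean(weight * np.abs(pred - target)))
```

With this weighting, an error in a bright (background) pixel contributes almost nothing, while the same error in a dark (foreground) pixel contributes at full strength, which mirrors the stated goal of focusing training on backlit regions.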
