Abstract

Images captured in dark environments often suffer from low visibility, which degrades both the visual aesthetics of the images and the performance of vision-based systems. Extensive studies have focused on the low-light image enhancement (LIE) problem. However, we observe that even state-of-the-art LIE methods may oversharpen the low-light image and introduce visual artifacts. To reduce these overshoot effects, this paper proposes an illumination-aware image quality assessment for enhanced low-light images, called LIE-IQA. Since directly applying IQA metrics designed for generally degraded images may perform poorly in this setting, the proposed LIE-IQA is an illumination-aware and learnable metric. First, the reflectance and shading components of both the enhanced low-light image and the reference image are extracted by intrinsic image decomposition. Then, LIE-IQA is computed as a weighted similarity between the VGG-based features of the enhanced low-light image and the reference image, where the weights of the measurement are learned from data pairs on a benchmark dataset. Qualitative and quantitative experiments illustrate the superiority of LIE-IQA in measuring the image quality of LIE on different datasets, including a new IQA dataset built for LIE. We also use LIE-IQA as a regularization term in the loss function to optimize an end-to-end LIE method, and the results indicate the potential of this optimization framework to reduce the overshoot effects of low-light enhancement.
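To make the metric concrete, below is a minimal sketch of the weighted VGG-feature similarity described above, assuming PyTorch and an ImageNet-pretrained VGG-16. The layer selection, the learnable per-layer weights, and the cosine-similarity form are illustrative assumptions rather than the authors' implementation, and the intrinsic decomposition step is omitted: in the paper, the similarity would be applied to the reflectance and shading components of each image rather than to the raw RGB inputs.

```python
# Sketch of an LIE-IQA-style learnable, feature-based similarity.
# Assumptions (not from the paper): the chosen VGG layers, cosine
# similarity, and softmax-normalized learnable layer weights.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class LIEIQASketch(torch.nn.Module):
    def __init__(self, layers=(3, 8, 15, 22)):  # relu1_2..relu4_3 (assumed)
        super().__init__()
        self.vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)  # frozen feature extractor
        self.layers = set(layers)
        # Learnable per-layer weights of the similarity measure; in the
        # paper these are learned from data pairs on a benchmark dataset.
        self.w = torch.nn.Parameter(torch.ones(len(layers)))

    def _features(self, x):
        feats = []
        for i, module in enumerate(self.vgg):
            x = module(x)
            if i in self.layers:
                feats.append(x)
        return feats

    def forward(self, enhanced, reference):
        # enhanced/reference: (N, 3, H, W) tensors, assumed already
        # ImageNet-normalized; in the paper they would be the decomposed
        # reflectance and shading components, scored per component.
        fe = self._features(enhanced)
        fr = self._features(reference)
        w = torch.softmax(self.w, dim=0)
        sims = [F.cosine_similarity(a.flatten(1), b.flatten(1)).mean()
                for a, b in zip(fe, fr)]
        return (w * torch.stack(sims)).sum()  # higher = more similar
```

In the optimization framework mentioned above, a term such as `1 - LIEIQASketch()(enhanced, reference)` could then be added to an end-to-end enhancement loss as the regularizer.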
