Abstract

Low-light image enhancement is a fundamental low-level computer vision task with a wide range of applications, including night-time surveillance, all-weather autonomous driving, and backlit imaging. In this paper, we propose a novel four-stage detachable framework for low-light image enhancement, in which each module can be trained and fine-tuned separately. More importantly, the proposed framework adapts effectively to unseen datasets. In the first stage, we propose an image decomposition network that decomposes an image captured under arbitrary lighting conditions into a reflectance map and an illumination map. The proposed loss function ensures that images of the same scene under different lighting conditions share the same reflectance map. In the second stage, instead of applying gamma correction, we establish an automatic searching scheme that finds an explicit mapping between illumination maps under different lighting conditions. In the third stage, we use an efficient and effective unsupervised training method to find the best parameter set for this mapping. Finally, a normal-light image is obtained by recombining the reflectance map with the transformed illumination map. Experiments on several popular datasets demonstrate the superiority of our method on unseen data: our framework surpasses other state-of-the-art methods and can enhance images across a wide range of lighting conditions.
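The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration only, not the paper's method: the naive channel-max decomposition and the fixed power-curve mapping below are hypothetical stand-ins for the paper's trained decomposition network and automatically searched illumination mapping.

```python
import numpy as np

def decompose(image, eps=1e-6):
    """Naive Retinex-style split I = R * L (element-wise).
    Placeholder: the paper uses a trained decomposition network;
    here illumination is approximated by the per-pixel channel maximum."""
    illumination = image.max(axis=-1, keepdims=True)
    reflectance = image / (illumination + eps)
    return reflectance, illumination

def remap_illumination(illumination, gamma=0.4):
    """Stand-in mapping between lighting conditions.
    Placeholder: the paper searches for an explicit mapping instead
    of using a fixed gamma-style curve like this one."""
    return np.power(illumination, gamma)

def enhance(image):
    """Decompose, remap the illumination, and recompose."""
    reflectance, illumination = decompose(image)
    return np.clip(reflectance * remap_illumination(illumination), 0.0, 1.0)

# Usage on a synthetic dark image with values in [0, 1]:
dark = np.random.default_rng(0).uniform(0.0, 0.2, size=(4, 4, 3))
bright = enhance(dark)
```

Because reflectance is held fixed and only the illumination map is remapped, scene content is preserved while brightness changes, which is the core idea behind Retinex-based enhancement frameworks of this kind.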
