Abstract

Enhancing low-light images improves both their visibility and quality. Existing methods focus primarily on the enhancement process and rely heavily on supervised learning, where low/normal-light image pairs serve as the training dataset. In this paper, we propose a novel method, Learn Enhancement by Learning Degradation (LELD), to achieve efficient light adjustment and scene fidelity. We use a carefully designed degradation network (DNet) to guide the enhancement network (ENet). Specifically, the role of DNet is to transform normal-light images into low-light images. For better generalization, we adopt an unsupervised learning strategy within a generative adversarial network framework, so training relies entirely on unpaired datasets. Inspired by Retinex theory, we propose a fidelity loss that preserves color and detail during the degradation process. ENet has a straightforward architecture and achieves efficient enhancement. Experimental results demonstrate the advantages of our method over state-of-the-art methods in terms of visual quality and enhancement efficiency.
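The following is a minimal, hypothetical PyTorch sketch of the unpaired training scheme the abstract describes: a degradation network (DNet) maps normal-light images to synthetic low-light images judged by a discriminator against real low-light images, while the enhancement network (ENet) learns to invert that degradation. The network architectures, the discriminator, and the Retinex-inspired fidelity term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvNet(nn.Module):
    """Placeholder image-to-image network standing in for both DNet and ENet."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.body(x)

class PatchDiscriminator(nn.Module):
    """Judges whether a low-light image is real or synthesized by DNet."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.body(x)

def fidelity_loss(degraded, normal):
    """Assumed Retinex-inspired fidelity term: the degraded image should keep
    the reflectance (color/detail) of its normal-light input, so the two
    max-channel-normalized reflectance estimates should match."""
    eps = 1e-4
    refl_d = degraded / (degraded.max(dim=1, keepdim=True).values + eps)
    refl_n = normal / (normal.max(dim=1, keepdim=True).values + eps)
    return F.l1_loss(refl_d, refl_n)

dnet, enet, disc = TinyConvNet(), TinyConvNet(), PatchDiscriminator()
opt_g = torch.optim.Adam(list(dnet.parameters()) + list(enet.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

# Unpaired batches: normal-light and real low-light images of different scenes.
normal = torch.rand(2, 3, 64, 64)
real_low = torch.rand(2, 3, 64, 64) * 0.3

for step in range(2):  # two illustrative iterations
    # --- Discriminator: real low-light vs. DNet's synthetic low-light ---
    fake_low = dnet(normal).detach()
    pred_real, pred_fake = disc(real_low), disc(fake_low)
    d_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
              + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generators: fool the discriminator, preserve fidelity, and let
    # ENet reconstruct the normal-light input from DNet's degradation ---
    fake_low = dnet(normal)
    pred_fake = disc(fake_low)
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    fid = fidelity_loss(fake_low, normal)
    rec = F.l1_loss(enet(fake_low), normal)
    g_loss = adv + fid + rec  # loss weights omitted for brevity
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key design point the abstract highlights survives even in this toy form: only DNet faces the adversarial game on unpaired data, while ENet is trained on the (normal, degraded) pairs that DNet manufactures, which is why ENet can stay architecturally simple and fast at inference time.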
