Abstract
Enhancing low-light images improves both their visibility and quality. Existing methods focus primarily on the enhancement process and rely heavily on supervised learning, where paired low/normal-light images serve as the training dataset. In this paper, we propose a novel method, Learn Enhancement by Learning Degradation (LELD), that achieves efficient light adjustment and scene fidelity. A carefully designed degradation network (DNet) guides the enhancement network (ENet); specifically, DNet transforms normal-light images into low-light images. For better generalization, we adopt an unsupervised learning strategy within a generative adversarial network framework, so training relies entirely on unpaired datasets. Inspired by Retinex theory, we propose a fidelity loss that preserves color and detail during degradation. The ENet has a straightforward architecture and performs enhancement efficiently. Experimental results demonstrate the advantages of our method over state-of-the-art methods in both visual quality and enhancement efficiency.
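The abstract does not give the exact form of the Retinex-inspired fidelity loss, but the idea can be sketched: Retinex theory models an image as reflectance times illumination, so a degradation that only dims illumination should leave reflectance (color and detail) intact. The snippet below is a minimal illustrative sketch under that assumption; the illumination estimator (per-pixel channel maximum) and the L1 reflectance penalty are hypothetical choices, not the paper's actual loss.

```python
import numpy as np

def illumination(img, eps=1e-6):
    # Rough illumination estimate: per-pixel maximum over color channels,
    # a common choice in Retinex-based methods (hypothetical here).
    return np.max(img, axis=-1, keepdims=True) + eps

def reflectance(img):
    # Retinex decomposition: image = reflectance * illumination.
    return img / illumination(img)

def fidelity_loss(normal, degraded):
    # Sketch of a fidelity loss: penalize reflectance differences so the
    # degradation network darkens illumination while preserving
    # color and detail (the reflectance component).
    return float(np.mean(np.abs(reflectance(normal) - reflectance(degraded))))

# A degradation that only scales illumination keeps reflectance unchanged,
# while one that corrupts structure does not:
rng = np.random.default_rng(0)
normal = rng.uniform(0.2, 1.0, size=(8, 8, 3))
dimmed = normal * 0.3  # pure illumination drop
noisy = np.clip(normal * 0.3 + rng.normal(0.0, 0.1, normal.shape), 1e-3, 1.0)

print(fidelity_loss(normal, dimmed))  # near zero: color/detail preserved
print(fidelity_loss(normal, noisy))   # larger: structure corrupted
```

In the full method this penalty would be one term alongside the adversarial losses of the GAN framework, keeping DNet's outputs plausible low-light versions of the same scene.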