Abstract

Low-light image enhancement aims to recover normal-light images from images captured in very dim environments. Existing methods struggle to handle noise, color bias, and over-exposure, and fail to ensure visual quality when paired training data are unavailable. To address these problems, we propose LE-GAN, a novel unsupervised low-light image enhancement network based on generative adversarial networks and trained with unpaired low/normal-light images. Specifically, we design an illumination-aware attention module that strengthens the network's feature extraction to suppress noise and color bias and to improve visual quality. We further propose a novel identity invariant loss that addresses over-exposure by making the network learn to enhance low-light images adaptively. Extensive experiments show that the proposed method achieves promising results. Furthermore, we collect a large-scale low-light dataset named Paired Normal/Lowlight Images (PNLI). It consists of 2,000 pairs of low/normal-light images captured in various real-world scenes, providing the research community with a high-quality dataset to advance the development of this field.
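To make the two core ideas more concrete, the sketch below shows, in a minimal PyTorch style, how an illumination-aware attention block and an identity invariant loss term could be realized. The module structure, layer sizes, and loss formulation are illustrative assumptions only; they are not taken from the paper and do not reproduce the authors' actual LE-GAN implementation.

# Illustrative sketch only: module design and loss are assumptions, not the
# authors' LE-GAN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IlluminationAwareAttention(nn.Module):
    """Hypothetical attention block: estimates a per-pixel illumination map
    from the input features and uses it to re-weight them, so that darker
    regions receive stronger enhancement."""

    def __init__(self, channels: int):
        super().__init__()
        # Lightweight illumination estimator (assumed design).
        self.illum = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        illum_map = self.illum(feat)     # (N, 1, H, W), brightness proxy in [0, 1]
        attention = 1.0 - illum_map      # attend more strongly to dark regions
        return feat * (1.0 + attention)  # residual re-weighting of features


def identity_invariant_loss(generator: nn.Module,
                            normal_light: torch.Tensor) -> torch.Tensor:
    """Hypothetical identity invariant term: a normal-light image fed to the
    generator should pass through nearly unchanged, discouraging over-exposure."""
    return F.l1_loss(generator(normal_light), normal_light)


# Example usage (shapes are illustrative):
# attn = IlluminationAwareAttention(channels=64)
# out = attn(torch.randn(1, 64, 128, 128))

In this reading, the attention block biases enhancement toward under-exposed regions, while the identity term penalizes the generator for altering images that are already well lit, which is one plausible way the over-exposure problem described above could be suppressed.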
