Abstract

Low-light images are generally produced by shooting in a dim environment or from a difficult angle. They not only degrade human perception but also hurt the performance of downstream artificial-intelligence algorithms such as object detection and super-resolution. Low-light enhancement faces two main difficulties: first, applying image-processing algorithms independently to each low-light image often causes color distortion; second, texture must be restored in extremely dark regions. To address these issues, we present two novel and general approaches. First, we propose a new loss function that constrains the ratio between corresponding RGB pixel values in the low-light image and the high-light image. Second, we propose a new framework named GLNet, which uses dense residual connection blocks to extract deep features from low-light images and adds a grayscale-channel network branch that guides texture restoration on the RGB channels by enhancing the grayscale image. Ablation experiments demonstrate the effectiveness of the proposed modules. Extensive quantitative and perceptual experiments show that our approach achieves state-of-the-art performance on public datasets.
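The abstract does not give the exact form of the ratio-constraining loss, but the idea of penalizing inconsistency in the per-pixel RGB ratio between a low-light image and its high-light reference can be sketched as follows. This is a hypothetical illustration, not the paper's actual formulation; the function name `ratio_consistency_loss` and the variance-style penalty are assumptions.

```python
import numpy as np

def ratio_consistency_loss(low, high, eps=1e-6):
    """Hypothetical sketch of a ratio-based loss: compute the
    per-pixel, per-channel ratio between the high-light reference
    and the low-light input, then penalize how much that ratio
    varies across the image. A spatially uniform ratio corresponds
    to a globally consistent enhancement without color distortion."""
    # eps avoids division by zero in near-black regions
    ratio = (high + eps) / (low + eps)
    # Penalize deviation of each pixel's ratio from the mean ratio
    return float(np.mean((ratio - ratio.mean()) ** 2))

# A perfectly uniform brightening (every pixel scaled by the same
# factor) yields a near-zero loss; spatially varying scaling does not.
low = np.full((8, 8, 3), 0.2)
uniform_high = low * 2.0
print(ratio_consistency_loss(low, uniform_high))
```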
