Abstract
Images captured under conditions such as backlighting, overexposure, or darkness suffer from low sharpness, low contrast, and unwanted noise. Learning-based low-light enhancement methods offer robust feature learning and mapping capabilities. We therefore propose a learning-based joint contrast enhancement and noise suppression method for low-light images, termed JCENS. JCENS is composed of three subnetworks: a low-light image denoising network (LDNet), an attention feature extraction network (AENet), and a low-light image enhancement network (LENet). In particular, LDNet produces a low-light image free of unwanted noise. AENet mitigates the impact of LDNet's denoising on local details and generates attention enhancement features. Finally, LENet combines the outputs of LDNet and AENet to produce a noise-free, normal-light image. The proposed network balances brightness enhancement against noise suppression, thereby better preserving salient image details. Experiments on both synthetic and real-world data demonstrate the superior performance of JCENS in terms of quantitative evaluations and visual image quality.
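The abstract describes a three-subnetwork pipeline in which LDNet denoises the low-light input, AENet produces attention enhancement features, and LENet fuses the two to yield the enhanced result. The following is a minimal, hypothetical PyTorch sketch of that data flow only; the module architectures, layer choices, channel sizes, and the assumption that AENet operates on the original low-light input are illustrative, not the authors' actual JCENS implementation.

# Hypothetical sketch of the JCENS pipeline described in the abstract.
# Layer choices and channel sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class LDNet(nn.Module):
    """Sketch of the denoising subnetwork: predicts a noise-free low-light image."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: estimate the noise and subtract it from the input.
        return x - self.body(x)


class AENet(nn.Module):
    """Sketch of the attention subnetwork: produces attention enhancement features
    intended to compensate for local detail lost during denoising."""
    def __init__(self, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.features(x)
        # Weight the features with a learned attention map.
        return feats * self.attention(feats)


class LENet(nn.Module):
    """Sketch of the enhancement subnetwork: fuses the denoised image with the
    attention features to output a noise-free, normal-light image."""
    def __init__(self, channels=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 + channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, denoised, attn_feats):
        return self.fuse(torch.cat([denoised, attn_feats], dim=1))


class JCENS(nn.Module):
    """Joint pipeline: denoise, extract attention features, then enhance."""
    def __init__(self, channels=32):
        super().__init__()
        self.ldnet = LDNet(channels)
        self.aenet = AENet(channels)
        self.lenet = LENet(channels)

    def forward(self, low_light):
        denoised = self.ldnet(low_light)
        attn_feats = self.aenet(low_light)
        return self.lenet(denoised, attn_feats)


if __name__ == "__main__":
    model = JCENS()
    x = torch.rand(1, 3, 64, 64)   # toy low-light input in [0, 1]
    enhanced = model(x)
    print(enhanced.shape)          # torch.Size([1, 3, 64, 64])

The key design point conveyed by the abstract is that denoising and enhancement are handled by separate branches whose outputs are fused, rather than by a single network, which is how the method balances noise suppression against the preservation of local detail.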