Abstract

Images captured in low-light conditions frequently suffer a significant loss of quality. Addressing these degradations is essential for improving both the visual quality of low-light images and the performance of downstream high-level vision tasks. However, because of the inherent information loss in dark images, conventional Retinex-based approaches to low-light image enhancement frequently fail to achieve real denoising. This paper introduces DEGANet, a deep-learning framework designed specifically for enhancing and denoising low-light images. To overcome these limitations, DEGANet leverages the strength of a Generative Adversarial Network (GAN). Our Retinex-based DEGANet architecture consists of three connected subnets: a Decom-Net, an Enhance-Net, and a GAN. The Decom-Net separates the input low-light image into its reflectance and illumination components. This decomposition enables the Enhance-Net to effectively enhance the illumination component, thereby improving the overall image quality. Denoising low-light images is a significant challenge due to their complicated noise patterns, fluctuating intensities, and intrinsic information loss. By incorporating a GAN into the architecture, DEGANet can effectively denoise and smooth the enhanced image, recover lost detail, and fill in missing content, producing an output that is visually pleasing while preserving key features. Through a comprehensive set of experiments, we demonstrate that DEGANet surpasses current state-of-the-art methods in terms of both image enhancement and denoising quality.
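The Retinex pipeline underlying the architecture can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's learned networks: the `decompose` function uses a common hand-crafted Retinex initialization (per-pixel channel maximum as illumination) in place of the learned Decom-Net, and a simple gamma curve stands in for the Enhance-Net; the GAN-based denoising stage is omitted.

```python
import numpy as np

def decompose(image):
    # Hand-crafted stand-in for Decom-Net: estimate illumination as the
    # per-pixel channel maximum, then divide it out so that
    # image ≈ reflectance * illumination (the Retinex model).
    illumination = image.max(axis=-1, keepdims=True)
    reflectance = image / np.maximum(illumination, 1e-6)
    return reflectance, illumination

def enhance_illumination(illumination, gamma=0.45):
    # Stand-in for Enhance-Net: a gamma curve that brightens dark regions
    # more strongly than bright ones.
    return np.power(illumination, gamma)

def enhance(image):
    # Full sketch of the pipeline: decompose, enhance the illumination
    # component only, then recompose reflectance with the new illumination.
    reflectance, illumination = decompose(image)
    return np.clip(reflectance * enhance_illumination(illumination), 0.0, 1.0)

# Toy low-light image with values in [0, 0.2]
dark = np.random.default_rng(0).uniform(0.0, 0.2, size=(8, 8, 3))
bright = enhance(dark)
```

Because only the illumination map is modified, the recomposed image keeps the reflectance (scene content) intact while lifting overall brightness; in DEGANet this hand-crafted split and curve are replaced by learned subnets.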
