Abstract

Low-light image enhancement is challenging under non-uniform illumination, where enhancement often introduces local overexposure, amplified noise, and color distortion. To obtain satisfactory results, most models must resort to carefully selected paired or multi-exposure datasets. In this paper, we propose a self-supervised framework for non-uniform low-light image enhancement that addresses these issues and requires only low-light images for training. We first design a robust Retinex model-based image exposure enhancement network (EENet) that achieves global brightness enhancement and noise removal by carefully designing a loss function for each decomposition map. Then, to correct overexposed areas in the enhanced image, we also enhance the inverse of the low-light image with EENet. Furthermore, we design a three-branch asymmetric exposure fusion network (TAFNet) that takes the two enhanced images and the original image as inputs and produces a globally well-exposed, detail-rich result. Experimental results demonstrate that our framework outperforms several state-of-the-art methods in both visual and quantitative comparisons.
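To make the described pipeline concrete, below is a minimal sketch of the dual-exposure workflow, assuming a PyTorch implementation. The `EENet` and `TAFNet` classes here are illustrative stand-ins (small convolutional stacks), not the paper's actual architectures; the inversion step (enhancing `1 - I` with EENet and inverting back) is one plausible reading of how the inverse image is incorporated.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the networks named in the abstract.
# Layer choices are assumptions for illustration only.

class EENet(nn.Module):
    """Placeholder exposure enhancement network (Retinex-style stand-in)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

class TAFNet(nn.Module):
    """Placeholder three-branch asymmetric exposure fusion network."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            for _ in range(3)
        )
        self.fuse = nn.Sequential(nn.Conv2d(48, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, a, b, c):
        feats = [branch(x) for branch, x in zip(self.branches, (a, b, c))]
        return self.fuse(torch.cat(feats, dim=1))

def enhance(low, eenet, tafnet):
    # 1) Globally brighten and denoise the low-light input.
    bright = eenet(low)
    # 2) Enhance the inverted input to correct overexposed regions,
    #    then invert back into the original intensity domain.
    inv_bright = 1.0 - eenet(1.0 - low)
    # 3) Fuse both enhanced images with the original input.
    return tafnet(bright, inv_bright, low)

if __name__ == "__main__":
    low = torch.rand(1, 3, 128, 128)  # dummy low-light image in [0, 1]
    out = enhance(low, EENet(), TAFNet())
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

The key design point the sketch captures is the asymmetric three-branch input: the brightened image, the inverse-enhanced image, and the untouched original each carry complementary exposure information, and fusion happens in feature space rather than by pixel-wise averaging.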
