Abstract

Restoring image quality in low-light environments is a challenging problem. While deep learning models have made significant strides in low-light enhancement, most do not take into account the inherent characteristics of the image content itself. In this paper, we use the characteristics of the image itself to construct a Signal-to-Noise Ratio (SNR) map that guides variation in the signal space and dynamically stretches pixel values. Specifically, we propose a novel SNR-guided enhancement framework that uses feature information from the original image to guide spatial variation in the image. The framework applies step-wise guidance to image feature fusion, gradually emphasizing high-frequency feature information within the image. In addition, we introduce a texture optimization module that uses the features extracted by the fusion module to mitigate overexposure and detail loss. We perform qualitative and quantitative evaluations on synthetic and real low-light image datasets to demonstrate the performance of our method. The experimental results show that our model outperforms other state-of-the-art (SOTA) methods in robust low-light enhancement, especially when processing images captured in complex scenes.
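
To illustrate the idea of an SNR map derived from the image itself, the sketch below (not the paper's implementation) treats a locally smoothed copy of the image as the signal and the residual between the image and that copy as noise; the kernel size and epsilon floor are illustrative assumptions, and a learned denoiser could replace the local mean filter.

```python
import numpy as np

def snr_map(image, kernel_size=5, eps=1e-6):
    """Approximate a per-pixel SNR map for a low-light image.

    A minimal sketch: the 'signal' is a locally smoothed copy of the
    luminance and the 'noise' is the absolute residual between the
    luminance and that smoothed copy. kernel_size and eps are
    illustrative choices, not values taken from the paper.
    """
    gray = image.astype(np.float32)
    if gray.ndim == 3:
        gray = gray.mean(axis=2)  # collapse RGB to a single luminance channel

    pad = kernel_size // 2
    padded = np.pad(gray, pad, mode="reflect")

    # Local mean filter as a cheap stand-in for a denoiser.
    smoothed = np.zeros_like(gray)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            smoothed += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    smoothed /= kernel_size ** 2

    noise = np.abs(gray - smoothed)
    return smoothed / (noise + eps)  # higher values indicate cleaner regions
```

Regions with high values in such a map are treated as relatively clean signal, while low-value regions are noise-dominated and can be weighted differently during enhancement.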
