Abstract

Deep convolutional neural networks have recently been applied to improve the quality of low-light images and have achieved promising results. However, most existing methods cannot effectively suppress noise during the enhancement process, resulting in undesired artifacts and color distortions. In addition, these methods do not fully utilize illumination information and perform poorly under extremely low-light conditions. To alleviate these problems, we propose the illumination guided attentive wavelet network (IGAWN) for low-light image enhancement (LLIE). Considering that the wavelet transform can effectively separate high-frequency noise from the desired low-frequency content, we enhance low-light images in the frequency domain. By integrating attention mechanisms with the wavelet transform, we develop the attentive wavelet transform to capture the more important wavelet features, which enables the desired content to be enhanced and the redundant noise to be suppressed. To improve enhancement performance in extremely low-light environments, we extract illumination information from the input images and exploit it as guidance for image enhancement through the frequency feature transform (FFT) layer. The proposed FFT layer generates a frequency-aware affine transformation from the estimated illumination information, which can adaptively modulate the image features of different frequencies. Extensive experiments on synthetic and real-world datasets demonstrate that our IGAWN performs favorably against state-of-the-art LLIE methods.
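
The abstract describes two mechanisms: an attentive wavelet transform that re-weights wavelet sub-band features, and an FFT layer that maps estimated illumination to a frequency-aware affine modulation. The sketch below is not the authors' code; it is a minimal illustration, assuming a single-level Haar decomposition, squeeze-and-excitation style channel attention, and a convolutional affine-parameter predictor. All module names, channel sizes, and hyperparameters are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of the two ideas in the abstract:
# (1) wavelet sub-band features re-weighted by channel attention, and
# (2) an FFT-style layer predicting per-sub-band (gamma, beta) from illumination features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x):
    """Single-level Haar transform: (B, C, H, W) -> LL, LH, HL, HH sub-bands of size H/2 x W/2."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention to emphasise informative sub-band channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # (B, C) channel descriptors
        return x * w.unsqueeze(-1).unsqueeze(-1)  # re-weight each channel


class FrequencyFeatureTransform(nn.Module):
    """Hypothetical FFT-layer sketch: illumination features -> per-sub-band scale and shift."""
    def __init__(self, illum_channels, feat_channels):
        super().__init__()
        self.to_affine = nn.Conv2d(illum_channels, 2 * feat_channels, kernel_size=3, padding=1)

    def forward(self, feat, illum_feat):
        # Match the spatial size of the sub-band features, then predict gamma and beta.
        illum_feat = F.interpolate(illum_feat, size=feat.shape[-2:], mode="bilinear",
                                   align_corners=False)
        gamma, beta = torch.chunk(self.to_affine(illum_feat), 2, dim=1)
        return gamma * feat + beta                # frequency-aware affine modulation


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                # toy low-light input
    illum = torch.rand(1, 8, 64, 64)              # toy illumination features
    ll, lh, hl, hh = haar_dwt(img)
    attn = ChannelAttention(3)
    fft_layer = FrequencyFeatureTransform(illum_channels=8, feat_channels=3)
    out = fft_layer(attn(hh), illum)              # attend, then modulate a high-frequency band
    print(out.shape)                              # torch.Size([1, 3, 32, 32])
```

In this toy setup the attention stage decides how strongly each sub-band channel contributes, while the affine modulation lets brighter or darker regions (as seen by the illumination branch) scale and shift those features differently, which is the adaptive, frequency-aware behaviour the abstract attributes to the FFT layer.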
