Abstract

This paper proposes a dense network built on an improved Transformer architecture that restores low-light images to high-quality normal-light images, alleviating the low brightness, high noise, and loss of critical information typical of low-light imagery. The architecture combines long and short skip connections into a dense topology: while retaining the Transformer's self-attention mechanism, it fuses shallow and deep features at multiple levels, supplying the restoration process with rich image features. In addition, a combined spatial-domain and frequency-domain loss function is designed; by penalizing both pixel-level and frequency-domain errors, it effectively constrains the restoration process and mitigates spectral bias. Finally, a multi-scale hybrid gate feedforward network replaces the conventional feedforward network in the Transformer block, improving feature selection and forward propagation. Together, these designs enrich the meaningful features available to the network, alleviate spectral bias, and improve the visual quality of enhanced images. Experiments on several standard image enhancement datasets demonstrate the superiority of the proposed method over state-of-the-art networks. On the widely used LOLv1 low-light dataset, our method improves PSNR and SSIM by 1.3% and 3.07%, respectively, over the best-performing competing network, yielding favorable qualitative and quantitative results. The proposed method addresses the problem of insufficiently realistic results in low-light image restoration and provides a reliable reference for practical applications.
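To make the combined spatial- and frequency-domain loss concrete, below is a minimal PyTorch sketch of the kind of objective the abstract describes. The paper's exact formulation is not given here, so the use of L1 penalties in both domains, the orthonormal FFT, and the weighting factor `lambda_freq` are illustrative assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn


class SpatialFrequencyLoss(nn.Module):
    """Sketch of a combined pixel-level and frequency-domain loss.

    The restoration is constrained both per pixel and in the Fourier
    spectrum, which is one common way to mitigate spectral bias.
    The L1 penalties and `lambda_freq` weight are assumptions.
    """

    def __init__(self, lambda_freq: float = 0.1):
        super().__init__()
        self.lambda_freq = lambda_freq
        self.l1 = nn.L1Loss()

    def forward(self, restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Pixel-level loss in the spatial domain.
        spatial = self.l1(restored, target)
        # Frequency-domain loss: compare the 2-D spectra through the
        # real and imaginary parts of the FFT of each image.
        freq_r = torch.fft.fft2(restored, norm="ortho")
        freq_t = torch.fft.fft2(target, norm="ortho")
        frequency = self.l1(torch.view_as_real(freq_r), torch.view_as_real(freq_t))
        return spatial + self.lambda_freq * frequency
```

In training, such a loss would simply replace a plain L1/L2 objective, e.g. `loss = SpatialFrequencyLoss()(net(low_light), ground_truth)`.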
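Similarly, the multi-scale hybrid gate feedforward network is only named in the abstract, so the following is a hedged sketch of one plausible realization: a gated feedforward block over 2-D feature maps in which two depth-wise convolution branches at different kernel sizes provide multi-scale context. The expansion factor, kernel sizes, activation, and element-wise gating are assumptions for illustration, not the paper's verified design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleGatedFFN(nn.Module):
    """Illustrative gated feedforward block for Transformer layers
    operating on 2-D feature maps (shape: batch x dim x H x W).

    Two parallel depth-wise branches (3x3 and 5x5) capture context at
    different scales; one branch gates the other by an element-wise
    product before projecting back to `dim` channels. All of the
    hyperparameters here are assumptions.
    """

    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.proj_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        # Depth-wise convolutions at two spatial scales.
        self.dw3 = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.dw5 = nn.Conv2d(hidden, hidden, 5, padding=2, groups=hidden)
        self.proj_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.proj_in(x).chunk(2, dim=1)
        # Gate: the activated 3x3 branch modulates the 5x5 branch.
        return self.proj_out(F.gelu(self.dw3(a)) * self.dw5(b))
```

A block like this would sit in place of the standard MLP after self-attention, letting the gate suppress uninformative features before forward propagation.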
