Abstract
Low-light image enhancement aims to restore images captured under insufficient lighting to normally exposed images. Methods based on Retinex theory are a common approach: the image is decomposed into an illumination component and a reflectance component, each component is enhanced separately, and the results are fused to produce the enhanced image. However, most decomposition and enhancement networks in this field are built by stacking convolutions or up/down-sampling layers without the guidance of relevant semantic information, so many details are lost in the decomposed and enhanced images. To alleviate these problems, we propose a low-light image enhancement model based on Retinex theory and residual attention. Guided by semantic information from the channel and spatial domains, it obtains smoother, less noisy images in the decomposition stage and restores image texture and color with high quality in the enhancement stage. Moreover, we design loss functions better suited to the decomposition and enhancement tasks to constrain the learning of the different networks. In addition, we design a residual block fused with a dual attention unit, which stably extracts richer image features and suppresses noise. Finally, we compare our model with mainstream methods of recent years on public datasets. Extensive experimental results show that our model outperforms these methods, demonstrating excellent performance and potential.
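The Retinex decompose-enhance-fuse pipeline described above can be sketched in a minimal, non-learned form. This is only an illustration of the underlying model I = R * L, not the paper's learned networks: the illumination map is approximated here by the per-pixel channel maximum and enhanced with a simple gamma curve, both of which are conventional stand-ins for the trained decomposition and enhancement stages.

```python
import numpy as np

def retinex_decompose(img, eps=1e-3):
    """Split an image into illumination L and reflectance R (I = R * L).

    The channel-wise maximum is a common coarse estimate of illumination
    in Retinex-based methods; a learned network would replace this step.
    """
    L = img.max(axis=-1, keepdims=True)   # coarse illumination map
    R = img / (L + eps)                   # reflectance, roughly light-invariant
    return R, L

def enhance(img, gamma=0.45, eps=1e-3):
    """Brighten a low-light image by adjusting the illumination component
    and fusing it back with the reflectance component."""
    R, L = retinex_decompose(img, eps)
    L_enh = np.power(L, gamma)            # lift dark regions, compress bright ones
    return np.clip(R * L_enh, 0.0, 1.0)

# Toy example: a dim gray image becomes brighter after enhancement.
dark = np.full((4, 4, 3), 0.1)
bright = enhance(dark)
```

In the paper's setting, both the decomposition and the illumination adjustment are performed by attention-guided networks trained with task-specific losses, rather than the fixed heuristics used in this sketch.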