Abstract

Images captured under low-light conditions suffer from low brightness and poor contrast, which degrades the accuracy of downstream computer vision tasks. In recent years, a variety of deep-learning-based low-light image enhancement (LLIE) models have been proposed, but they fail to fully extract multiscale information across multiple stages, resulting in poor generalization and unstable models. Moreover, many existing multistage networks cause color distortion and stylization of images because they transmit excessive noncritical information between stages. To address these issues, we propose an LLIE method based on a multistage feature fusion network. Our network consists of three stages. In the first two stages, S-UNet, which combines a UNet with a spatial weighted residual channel attention block (SWRCAB), helps the network extract more of the critical multiscale information while occupying few computing resources. In the third stage, we fuse the SWRCAB and a nonlocal sparse block into the original-resolution enhancement network to enhance the image pixel by pixel at its original resolution. We also propose a fusion attention mechanism, which provides genuine and effective supervision and restricts the information transmitted to each stage to a small amount of critical features. In addition, we add illumination guidance for image segmentation at the beginning of each stage so that the model can better focus on the dark regions of the low-light image and avoid overexposure. Experiments on multiple benchmark datasets qualitatively and quantitatively demonstrate that the proposed method is more competitive than state-of-the-art methods.
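The abstract does not specify the internal design of the SWRCAB. As a rough illustration only, the following PyTorch sketch shows one plausible reading of a "spatial weighted residual channel attention block": a residual convolutional block whose features are reweighted by a squeeze-and-excitation-style channel attention vector and by a sigmoid spatial weight map. All layer choices here (kernel sizes, the reduction ratio, the 7x7 spatial convolution) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SWRCAB(nn.Module):
    """Hypothetical sketch of a spatial weighted residual channel attention
    block. The paper's actual design may differ; this only illustrates the
    combination of residual learning, channel attention, and spatial weighting
    named in the abstract."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Residual body: two 3x3 convolutions with a ReLU in between.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: global average pool -> bottleneck 1x1 convs -> sigmoid.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial weighting: a single-channel sigmoid map over H x W (assumed 7x7 kernel).
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        feat = feat * self.channel_att(feat)  # reweight channels
        feat = feat * self.spatial_att(feat)  # reweight spatial positions
        return x + feat                       # residual connection

if __name__ == "__main__":
    block = SWRCAB(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Under this reading, stacking such blocks inside the encoder-decoder levels of a UNet would yield the S-UNet described for the first two stages.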
