Abstract
Low-light image enhancement has been an important research branch in the field of computer vision. Low-light images are characterized by poor visibility, high noise and low contrast. To improve images captured in low-light and night-time conditions, we propose an Attention-Guided Multi-scale Feature Fusion Network (MSFFNet) that enhances the contrast and brightness of low-light images. First, to avoid the high computational cost arising from stacking multiple sub-networks, our network uses a single encoder and decoder with multi-scale input and output images. Multi-scale inputs compensate for the pixel information and feature-map information lost when only a single input image is used, while multi-scale outputs allow the reconstruction error to be supervised at every scale. Second, the Convolutional Block Attention Module (CBAM) is introduced in the encoder to suppress the noise and color distortion produced during feature extraction and to further guide the network in refining color features. A feature calibration module (FCM) is introduced in the decoder to enhance the mapping between channels, and an attention fusion module (AFM) is added to capture contextual information, which helps recover image detail. Last, a cascade fusion module (CFM) is introduced to effectively combine feature maps obtained under different receptive fields. Extensive qualitative and quantitative experiments on several publicly available datasets show that the proposed MSFFNet outperforms other low-light enhancement methods in both visual quality and metric scores.
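To make the coarse-to-fine, multi-scale encoder-decoder idea concrete, the following is a minimal PyTorch sketch. It is an illustrative assumption, not the authors' released implementation: the two-scale layout, channel widths, and the simplified CBAM block are placeholders, and the FCM, AFM and CFM modules are not reproduced here.

```python
# Minimal sketch of a single encoder-decoder with multi-scale inputs/outputs and a
# simplified CBAM block. All module names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CBAM(nn.Module):
    """Simplified Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average- and max-pooled descriptors.
        ca = torch.sigmoid(
            self.channel_mlp(F.adaptive_avg_pool2d(x, 1))
            + self.channel_mlp(F.adaptive_max_pool2d(x, 1))
        )
        x = x * ca
        # Spatial attention from per-pixel mean and max over channels.
        sa = torch.sigmoid(
            self.spatial(torch.cat([x.mean(1, keepdim=True),
                                    x.max(1, keepdim=True).values], dim=1))
        )
        return x * sa


class MultiScaleEncDec(nn.Module):
    """Single encoder-decoder taking multi-scale inputs and emitting multi-scale outputs."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, base, 3, padding=1),
                                  nn.ReLU(inplace=True), CBAM(base))
        self.enc2 = nn.Sequential(nn.Conv2d(base + 3, base * 2, 3, stride=2, padding=1),
                                  nn.ReLU(inplace=True), CBAM(base * 2))
        self.dec2 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True))
        self.out2 = nn.Conv2d(base, 3, 3, padding=1)   # half-resolution output (coarse supervision)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True))
        self.out1 = nn.Conv2d(base, 3, 3, padding=1)   # full-resolution output

    def forward(self, x):
        # Downsampled copy of the input is fed to the deeper encoder stage.
        x_half = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        f1 = self.enc1(x)
        f2 = self.enc2(torch.cat([F.interpolate(f1, scale_factor=0.5, mode='bilinear',
                                                align_corners=False), x_half], dim=1))
        d2 = self.dec2(f2)
        y_half = self.out2(d2)                          # coarse output, supervised at half scale
        d1 = self.dec1(torch.cat([f1, F.interpolate(d2, size=f1.shape[-2:], mode='bilinear',
                                                    align_corners=False)], dim=1))
        y_full = self.out1(d1)                          # fine output at input resolution
        return y_full, y_half
```

For example, calling `MultiScaleEncDec()(torch.rand(1, 3, 256, 256))` returns a full-resolution and a half-resolution enhanced image, and a reconstruction loss can be applied to both so that errors are supervised at each scale.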
Highlights
A growing number of researchers are working to solve the problem of image degradation in poorly illuminated scenes
We propose an Attention-Guided Multi-scale feature fusion network for low-light image enhancement, built on a single encoder-decoder structure and the coarse-to-fine network design principle
We propose an Attention-Guided Multi-scale feature fusion network (MSFFNet) to solve the low-light image enhancement problem
Summary
More and more researchers are working to solve the problem of image degradation in poorly illuminated scenes. Low-light image enhancement methods aim to restore image sharpness and contrast, as well as detailed information in dark regions, which is a very challenging task. In low-light environments, because of the limitations of image acquisition equipment, the photographs taken often suffer from low brightness, low contrast and severe noise. Such low-light images not only degrade the user's visual perception but also seriously affect high-level computer vision tasks such as target detection and recognition. Low-light enhanced images therefore provide good preconditions for subsequent target detection (Lin et al, 2017; Shen et al, 2020; Wang et al, 2020b), image recognition (Shi et al, 2020; Zhao et al, 2020), image segmentation (Zhang et al, 2021), image classification (Liu et al, 2020b), autonomous driving (Chen et al, 2016; Prakash et al, 2021) and other high-level vision tasks. Visual information processing is also indispensable in fields such as military operations, deep-sea exploration and biomedical imaging (Ardizzone et al, 2006, 2008).