Abstract

Nighttime environments with sub-optimal lighting conditions significantly degrade the quality of captured images. Although many notable state-of-the-art methods have been proposed to enhance low-light images, their enhanced outputs often exhibit color distortion and uneven light adjustment. To remedy these issues, we propose an effective supervised network, Low-light Advanced U-Net (LAU-Net), which restructures the standard U-Net into a network better suited to low-light image enhancement. Specifically, we integrate several effective components into LAU-Net, namely the Parallel Attention Unit (PAU), the Internal Resizing Module (IRM), and external convolutional layers. The PAU places two attention modules in parallel to extract features along the convolutional stream. Meanwhile, the IRM comprises resizing components that optimize the information flow from encoder blocks to decoder blocks, whereas the external convolutional layers emulate an autoencoder to suppress noise. We employed the LOL dataset, which is composed of 500 paired images, to train, validate, and test the proposed network. Rigorous experiments showed that our model delivered remarkable performance in both qualitative and quantitative assessments and outperformed state-of-the-art approaches. Moreover, ablation studies justified the necessity of each module in our proposed design. Lastly, we demonstrated that the proposed method can serve as an excellent pre-processing tool for image classification in challenging nighttime environments, as it improved the object classification accuracy of a ResNet-50 model when applied to low-light images from the ExDark dataset.
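
The abstract describes the PAU only as two attention modules running in parallel along the convolutional stream. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, assuming the two branches are channel and spatial attention and that their outputs are fused by a 1x1 convolution; all module names, branch choices, and layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a "Parallel Attention Unit" (PAU): two attention
# branches applied in parallel to the same feature map, then fused.
# Branch types (channel/spatial) and all sizes are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by a learned, globally pooled attention score.
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Build a per-pixel attention map from channel-wise mean and max.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class ParallelAttentionUnit(nn.Module):
    """Runs two attention branches in parallel and fuses their outputs."""

    def __init__(self, channels):
        super().__init__()
        self.channel_branch = ChannelAttention(channels)
        self.spatial_branch = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        fused = torch.cat(
            [self.channel_branch(x), self.spatial_branch(x)], dim=1
        )
        return self.fuse(fused)


if __name__ == "__main__":
    pau = ParallelAttentionUnit(64)
    features = torch.randn(1, 64, 128, 128)  # e.g., an encoder feature map
    print(pau(features).shape)               # torch.Size([1, 64, 128, 128])
```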
