Abstract

Low-light images suffer from low brightness, poor saturation, and insufficient detail. To address these problems, a low-light image enhancement method combining U-Net with a self-attention mechanism is proposed. A VGG network is added to the generator to construct a content-perception loss, ensuring that the enhanced image does not lose too many features of the original. A color loss is introduced to enrich the color information of the enhanced image; it is combined with the adversarial loss in a weighted form to optimize the low-light image enhancement method. A self-attention module is introduced into the U-Net framework so that the enhanced image retains rich detail features. The method is evaluated on the LOL, DICM, and ExDark datasets. The results show that, compared with existing mainstream methods, it not only improves the overall brightness and sharpness of low-light images but also enriches their detail features. The enhancement effect is evident, and the PSNR, SSIM, and NIQE evaluation metrics are significantly improved.

Keywords: Generative adversarial network; U-Net; Self-attention mechanism; Low-light image enhancement; Multiscale discriminator
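The abstract describes combining the adversarial, content-perception, and color losses in a weighted form. A minimal sketch of that combination is shown below; the function name and weight values are illustrative assumptions, since the abstract does not give the actual weights.

```python
def total_loss(adv_loss: float, content_loss: float, color_loss: float,
               w_adv: float = 1.0, w_content: float = 0.5,
               w_color: float = 0.5) -> float:
    """Weighted sum of the three loss terms named in the abstract:
    adversarial loss, VGG content-perception loss, and color loss.
    The weights here are placeholders, not the paper's values."""
    return w_adv * adv_loss + w_content * content_loss + w_color * color_loss
```

In training, the generator would minimize this combined objective, with the weights tuned so that no single term dominates the enhancement result.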
