Abstract

Low-light image enhancement has gradually become an active research topic in recent years due to its wide use as an important pre-processing step in computer vision tasks. Although numerous methods have achieved promising results, many still produce results with detail loss and local distortion. In this paper, we propose an improved generative adversarial network based on contextual information. Specifically, residual dense blocks are adopted in the generator to promote hierarchical feature interaction across multiple layers and enhance features at multiple depths in the network. Then, an attention module integrating multi-scale contextual information is introduced to refine and highlight discriminative features. A hybrid loss function combining perceptual and color components is used in the training phase to ensure overall visual quality. Qualitative and quantitative experimental results on several benchmark datasets demonstrate that our model achieves competitive results and generalizes well compared with other state-of-the-art low-light enhancement algorithms.
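To make the generator description concrete, the following is a minimal PyTorch sketch of a residual dense block of the kind the abstract mentions; the layer count, growth rate, and channel widths are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Minimal residual dense block: each conv sees the concatenation of all
    earlier feature maps (dense connectivity), a 1x1 conv fuses them back to
    the input width, and a residual skip preserves the block input.
    Layer count and growth rate here are illustrative assumptions."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connectivity widens the next layer's input
        # Local feature fusion: squeeze concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual learning: fused features plus the block input
        return x + self.fuse(torch.cat(features, dim=1))

# Example: rdb = ResidualDenseBlock(); y = rdb(torch.randn(1, 64, 32, 32))
```

The dense concatenation lets every layer reuse all earlier features, which is one way to realize the hierarchical feature interaction the abstract refers to, while the final residual skip keeps the block easy to optimize.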

Highlights

  • In a world where multimedia equipment is increasingly accessible, images and videos have become the most ubiquitous ways to convey and record information

  • We propose an improved generative adversarial network (GAN) for low-light image enhancement

  • Prior work such as EnlightenGAN [21] constructed an unsupervised GAN framework to perform low-light image enhancement without paired training samples, using a self-regularized attention mechanism and double discriminators, i.e., local and global discriminators, to handle unevenly distributed lighting in the input image. These works show that GANs have great potential in low-level image processing and that the encoder–decoder structure in the generator plays a pivotal role in feature representation (a minimal sketch of the double-discriminator idea follows this list)
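As an illustration of the double-discriminator idea in the last highlight, here is a minimal PyTorch sketch in which a global discriminator scores the whole image and a local discriminator scores random patches; the layer widths, patch size, and helper names are assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # Strided-conv downsampling, as in typical PatchGAN-style discriminators
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator(nn.Module):
    """Small convolutional discriminator producing a realness score map."""
    def __init__(self, in_ch: int = 3, base: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, base),
            conv_block(base, base * 2),
            conv_block(base * 2, base * 4),
            nn.Conv2d(base * 4, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def random_crops(img: torch.Tensor, size: int = 32, n: int = 4) -> torch.Tensor:
    """Sample n random patches per image for the local discriminator."""
    _, _, h, w = img.shape
    patches = []
    for _ in range(n):
        top = torch.randint(0, h - size + 1, (1,)).item()
        left = torch.randint(0, w - size + 1, (1,)).item()
        patches.append(img[:, :, top:top + size, left:left + size])
    return torch.cat(patches, dim=0)

# The global discriminator judges the whole enhanced image; the local one
# judges random patches, penalizing unevenly lit regions the global view misses.
global_d, local_d = Discriminator(), Discriminator()
fake = torch.randn(2, 3, 128, 128)          # stand-in for a generator output
global_score = global_d(fake)
local_score = local_d(random_crops(fake))
```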

Summary

Introduction

In a world where multimedia equipment is much more accessible, images and videos have become the most ubiquitous ways to convey and record information. Early enhancement methods [1,2,3,4] adopted the global histogram of the input image to estimate a pixel transformation function, but they ignored unevenly distributed darkness and may introduce over-exposure distortion in some areas of the enhanced result. To alleviate this problem, local histogram-based methods were proposed [5,6]. Jiang et al. proposed EnlightenGAN [21] for single-image low-light enhancement without paired training data. Different from the above methods, Guo et al. [25] formulated image enhancement as a task of image-specific curve estimation. This method requires neither paired nor unpaired training data and directly estimates pixel-wise curve parameters to adjust the input brightness, as sketched below.
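The curve-estimation formulation of [25] is compact enough to sketch directly: a network predicts per-pixel parameter maps α, and the quadratic curve LE(x) = x + α·x·(1 − x) is applied iteratively to adjust brightness. The sketch below assumes the commonly cited eight iterations with three α channels per iteration; the function name and tensor shapes are illustrative.

```python
import torch

def apply_enhancement_curve(image: torch.Tensor,
                            alpha_maps: torch.Tensor,
                            iterations: int = 8) -> torch.Tensor:
    """Iteratively apply the quadratic adjustment curve
    LE(x) = x + alpha * x * (1 - x), with a separate per-pixel alpha map
    (values in [-1, 1]) for each iteration. The iteration count and the
    channel layout of `alpha_maps` are illustrative assumptions."""
    x = image  # pixel values assumed normalized to [0, 1]
    for i in range(iterations):
        alpha = alpha_maps[:, i * 3:(i + 1) * 3, :, :]  # 3 channels per step
        x = x + alpha * x * (1.0 - x)
    return x

# In the full method a network predicts alpha_maps from the input image;
# here random maps stand in for that prediction.
img = torch.rand(1, 3, 64, 64)
alphas = torch.rand(1, 24, 64, 64) * 2 - 1   # 8 iterations x 3 channels
enhanced = apply_enhancement_curve(img, alphas)
```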

Generative Adversarial Network
Attention Mechanism
Dilated Convolution
Overall Network Architecture
Multi-Scale Context Attention Module
Loss Function
Dataset Description and Evaluation Metrics
Implementation Details
Comparison with State-of-the-Art Methods
Methods
Ablation Analysis
User Study
Computational Complexity
Findings
Conclusions