Abstract

Low-light image enhancement (LLIE) is a fundamental task in computer vision that aims to adjust the luminance of low-light images to recover their normal-light counterparts. Unsupervised LLIE methods have recently been developed, but their performance is limited by the lack of sufficient semantic information and of guidance from a strict discriminator. In this work, a semantic-aware generative adversarial network is proposed to alleviate these limitations. We use a VGG model pre-trained on ImageNet to extract prior semantic information, which is fed into the generator to refine its feature representations, and we develop an adaptive image fusion strategy at the output layer of the generator. Further, to improve the discriminator's capacity to supervise the generator, we design a densely connected dual discriminator and two time-aware, image-quality-driven priority queues. Quantitative and qualitative experiments on four test datasets demonstrate the competitiveness of the proposed model and the effectiveness of each component. Our code is available at: https://github.com/Shecyy/SAGAN.
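To make the semantic-prior idea concrete, the sketch below shows one plausible way to extract features from a frozen ImageNet-pretrained VGG and inject them into a generator's feature maps. This is a minimal illustration under assumptions, not the paper's exact design: the choice of VGG-16, the tapped layer (relu3_3), and the `SemanticInjection` fusion block are all hypothetical; see the repository linked above for the authors' implementation.

```python
# Minimal sketch: frozen VGG semantic priors fused into generator features.
# The layer choice and the SemanticInjection module are assumptions for
# illustration, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class SemanticPriorExtractor(nn.Module):
    """Frozen ImageNet-pretrained VGG-16 truncated at relu3_3 (assumed tap)."""
    def __init__(self, layer_idx: int = 16):
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_idx]
        for p in vgg.parameters():
            p.requires_grad_(False)  # the prior network is never updated
        self.vgg = vgg.eval()

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns a semantic feature map, here (B, 256, H/4, W/4)
        return self.vgg(x)

class SemanticInjection(nn.Module):
    """Hypothetical fusion block: project the VGG prior to the generator's
    channel width, resize it, and add it to the generator feature map."""
    def __init__(self, prior_ch: int, gen_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(prior_ch, gen_ch, kernel_size=1)

    def forward(self, gen_feat: torch.Tensor, prior_feat: torch.Tensor) -> torch.Tensor:
        prior = F.interpolate(prior_feat, size=gen_feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        return gen_feat + self.proj(prior)
```

In training, the extractor would run once per input image and the resulting prior would be fused into generator features at one or more matching resolutions; freezing the VGG keeps the semantic guidance stable while only the generator and discriminator are optimized.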
