Abstract

Many images captured under insufficient lighting suffer from overall darkness, low contrast, noise, and inconspicuous details, which hinder observation of the image content and its subsequent use. This paper presents a novel method, Attention U-Net Dual Discriminator-Generative Adversarial Networks (AUDD-GAN), which combines an attention mechanism, a U-Net network structure, and a dual discriminator. First, the method generates a normalized attention map that guides the subsequent modules during illumination enhancement, preventing overexposure and underexposure; second, the U-Net generative network extracts features at multiple levels to capture richer detail information; then, a dual discriminator is used as the discriminative network, with PatchGAN serving as the local discriminator to achieve better enhancement; finally, multiple loss functions are combined into a joint loss function. AUDD-GAN is verified to enhance the global contrast of the image while recovering detail features more effectively.
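To illustrate the role of the normalized attention map described above, the following is a minimal NumPy sketch, not the paper's actual network: it assumes a Retinex-style illumination estimate (the per-pixel maximum over color channels) and assigns higher attention weights to darker regions, so that enhancement is applied strongly where the image is dark and only weakly where it is already bright, avoiding overexposure. The function names and the `gain` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def attention_map(img):
    """Normalized attention map in [0, 1]; darker pixels get higher weight.

    img: float array of shape (H, W, 3) with values in [0, 1].
    Illumination is estimated as the per-pixel max over channels
    (an assumption; the paper's attention module is learned).
    """
    illum = img.max(axis=-1)
    return (illum.max() - illum) / (illum.max() - illum.min() + 1e-8)

def guided_enhance(img, gain=2.0):
    """Brighten the image proportionally to the attention map.

    Dark regions are amplified by up to `gain`; already-bright
    regions are left almost unchanged, limiting overexposure.
    """
    att = attention_map(img)
    enhanced = img + att[..., None] * (gain - 1.0) * img
    return np.clip(enhanced, 0.0, 1.0)
```

In AUDD-GAN itself the attention map plays the same guiding role, but the enhancement is produced by the U-Net generator rather than a fixed gain.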
