Abstract

Automated image enhancement algorithms have a profound impact on everyday life. To address the problems of insufficient luminance, missing detail information, and overall color-tone bias in images captured by mobile devices, a novel framework, the core-attributes enhanced generative adversarial network (CAE-GAN), is designed to improve these core attributes of enhanced images. The generator in CAE-GAN consists mainly of a luminance correction encoder (LCE) and a high-frequency supplementary decoder (HFSD). The LCE-based encoder incorporates extracted prior knowledge of luminance to adaptively improve brightness at each spatial location, while the HFSD-based decoder fills in missing edge details during image reconstruction. In addition, a multi-scale statistical characteristics distinction branch (MSCDB) is proposed to correct the overall tone, and an upgraded adversarial loss function is designed to perform discrimination at multiple scales and from multiple perspectives. The generator and discriminator are trained iteratively under the constraints of the total loss function, yielding a generator that automatically improves the visual quality of images. Extensive experiments show that CAE-GAN achieves excellent results on several evaluation metrics as well as in subjective comparisons. The source code of the proposed CAE-GAN is available at https://github.com/SWU-CS-MediaLab/CAE-GAN.
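The abstract's final training step, alternating generator and discriminator updates under an adversarial loss, can be illustrated with a minimal sketch. This is a hypothetical 1-D toy (linear "generator" and logistic "discriminator" on scalars), not the paper's LCE/HFSD generator or MSCDB discriminator; it only shows the iterative adversarial scheme.

```python
# Hedged sketch of alternating adversarial training (toy example).
# The real CAE-GAN uses deep image networks and a composite total loss;
# here both players are scalar linear models, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples centred at 2.0; the toy generator
    # g(z) = g_w * z + g_b should shift noise toward this distribution.
    return rng.normal(2.0, 0.1, n)

g_w, g_b = 1.0, 0.0   # generator parameters
d_w, d_b = 0.1, 0.0   # discriminator parameters (logistic on scalars)
lr = 0.05

for step in range(500):
    z = rng.normal(0.0, 1.0, 32)
    fake = g_w * z + g_b
    real = sample_real(32)

    # Discriminator update: push D(real) -> 1, D(fake) -> 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_dw = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_db = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_dw
    d_b -= lr * grad_db

    # Generator update: push D(G(z)) -> 1 (non-saturating loss).
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = -(1.0 - p_fake) * d_w   # d/dfake of -log D(fake)
    g_w -= lr * np.mean(grad_fake * z)
    g_b -= lr * np.mean(grad_fake)
```

After training, the generator's bias has drifted from 0 toward the real-data mean, the same self-correcting dynamic, scaled up, that lets the CAE-GAN generator improve its outputs against the discriminator's feedback.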
