Abstract
Automated image enhancement algorithms have a profound impact on daily life. To address the insufficient luminance, loss of detail, and overall color-tone bias common in images captured by mobile devices, we design a core-attributes enhanced generative adversarial network (CAE-GAN) to improve these core attributes of the enhanced images. The generator in CAE-GAN consists mainly of a luminance correction encoder (LCE) and a high-frequency supplementary decoder (HFSD). The LCE-based encoder incorporates extracted prior knowledge of luminance so that the luminance improvement adapts to each spatial location, while the HFSD-based decoder fills in missing edge details during image reconstruction. In addition, a multi-scale statistical characteristics distinction branch (MSCDB) is proposed to correct the overall tone, and an upgraded adversarial loss function is designed to perform discrimination at multiple scales and from multiple perspectives. The generator and discriminator are trained iteratively under the constraints of the total loss function, yielding a generator that automatically improves the visual quality of images. Extensive experiments show that CAE-GAN achieves excellent results on several evaluation metrics as well as in subjective comparisons. The source code of the proposed CAE-GAN is available at https://github.com/SWU-CS-MediaLab/CAE-GAN.
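To make the described layout concrete, the following is a minimal, hypothetical PyTorch sketch of the CAE-GAN components named above: an LCE-style encoder conditioned on a luminance prior, an HFSD-style decoder that supplements high-frequency detail, and an MSCDB-style multi-scale discriminator branch. All layer choices, the luminance-prior construction, and the multi-scale scoring here are assumptions for illustration; the abstract does not specify the actual architecture, which is detailed in the paper and the linked repository.

```python
# Hypothetical sketch of the CAE-GAN layout (assumed design, not the authors' exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LCEncoder(nn.Module):
    """Encoder conditioned on a per-pixel luminance prior (assumption)."""
    def __init__(self, ch=32):
        super().__init__()
        # 3 RGB channels + 1 luminance-prior channel.
        self.conv1 = nn.Conv2d(4, ch, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)

    def forward(self, x, lum_prior):
        h = F.relu(self.conv1(torch.cat([x, lum_prior], dim=1)))
        return F.relu(self.conv2(h))


class HFSDecoder(nn.Module):
    """Decoder that adds back high-frequency (edge) detail during reconstruction."""
    def __init__(self, ch=32):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1)

    def forward(self, feat, x):
        h = F.relu(self.up1(feat))
        out = torch.tanh(self.up2(h))
        # Supplement edge detail with the input's high-frequency residual (assumption).
        high_freq = x - F.avg_pool2d(x, 3, stride=1, padding=1)
        return out + high_freq


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = LCEncoder()
        self.decoder = HFSDecoder()

    def forward(self, x):
        # Simple luminance prior: per-pixel mean intensity (assumption).
        lum_prior = x.mean(dim=1, keepdim=True)
        return self.decoder(self.encoder(x, lum_prior), x)


class MultiScaleDiscriminator(nn.Module):
    """MSCDB-style branch: scores realism at several image scales."""
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Average the logits obtained at full, 1/2, and 1/4 resolution.
        scores = [self.head(F.interpolate(x, scale_factor=s, mode="bilinear",
                                          align_corners=False)).mean()
                  for s in (1.0, 0.5, 0.25)]
        return torch.stack(scores).mean()


if __name__ == "__main__":
    g, d = Generator(), MultiScaleDiscriminator()
    fake = g(torch.rand(1, 3, 128, 128))
    print(fake.shape, d(fake).item())
```

In an adversarial training loop, the discriminator score above would feed the multi-scale term of the total loss, while the generator is updated to both fool the discriminator and satisfy the reconstruction constraints.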