Abstract

The visual quality of images captured by mobile devices is often inferior to that of images captured by a digital single-lens reflex (DSLR) camera. This paper presents a novel generative adversarial network-based mobile image enhancement method, referred to as MIEGAN. It consists of a novel multi-module cascade generative network and a novel adaptive multi-scale discriminative network. The multi-module cascade generative network is built upon a two-stream encoder, a feature transformer, and a decoder. In the two-stream encoder, a luminance-regularizing stream is proposed to help the network focus on low-light areas. In the feature transformation module, two networks effectively capture both the global and local information of an image. To further help the generative network produce images of high visual quality, a multi-scale discriminator is used instead of a single regular discriminator to judge whether an image is real or fake both globally and locally. To balance the global and local discriminators, an adaptive weight allocation scheme is proposed. In addition, a contrast loss is proposed, and a new mixed loss function is developed to improve the visual quality of the enhanced images. Extensive experiments on the popular DSLR Photo Enhancement Dataset (DPED) and the MIT-Adobe FiveK dataset verify the effectiveness of the proposed MIEGAN.
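The abstract describes balancing a global and a local discriminator through adaptive weight allocation. The sketch below illustrates one plausible way such a scheme could work: the adversarial losses of the two discriminator branches are combined with weights derived from the losses themselves, so the branch that is currently harder for the generator receives more emphasis. The softmax-style weighting rule, function names, and scalar-loss interface are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math

def adaptive_weights(loss_global: float, loss_local: float) -> tuple:
    """Assumed weighting rule (softmax over per-branch losses):
    the branch with the larger loss gets the larger weight.
    This is an illustrative stand-in for MIEGAN's allocation scheme."""
    e_g = math.exp(loss_global)
    e_l = math.exp(loss_local)
    total = e_g + e_l
    return e_g / total, e_l / total

def combined_adv_loss(loss_global: float, loss_local: float) -> float:
    """Combine the global and local adversarial losses with the
    adaptive weights; the weights always sum to one."""
    w_g, w_l = adaptive_weights(loss_global, loss_local)
    return w_g * loss_global + w_l * loss_local
```

For example, if the global branch's loss is 2.0 and the local branch's is 1.0, the global branch receives roughly 73% of the weight, steering the generator toward the objective it currently satisfies worst.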
