Abstract

Face inpainting is a challenging task that aims to fill damaged or masked regions of face images with plausibly synthesized content. Based on the given information, the reconstructed regions should look realistic and, more importantly, preserve the demographic and biometric properties of the individual. The aim of this paper is to reconstruct the face from the periocular region (eyes-to-face). To this end, we propose a novel GAN-based deep learning model called Eyes-to-Face GAN (E2F-GAN), which includes two main modules: a coarse module and a refinement module. The coarse module, together with an edge predictor module, extracts the required features from the periocular region and generates a coarse output, which is then refined by the refinement module. Additionally, an eyes-to-face synthesis dataset has been generated from the public CelebA-HQ face dataset for training and testing. We perform both qualitative and quantitative evaluations on this dataset. Experimental results demonstrate that our method outperforms previous learning-based face inpainting methods and generates realistic and semantically plausible images. We also provide the implementation of the proposed approach to support reproducible research at <uri>https://github.com/amiretefaghi/E2F-GAN</uri>.
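The coarse-then-refine structure described in the abstract can be sketched as a minimal two-stage pipeline. This is an illustrative sketch only: the module names (`CoarseGenerator`, `RefinementGenerator`, `e2f_forward`), layer choices, and input channel layout are assumptions for demonstration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CoarseGenerator(nn.Module):
    """Illustrative coarse stage: masked image + mask + edge map -> coarse face."""
    def __init__(self):
        super().__init__()
        # Input channels: RGB image (3) + binary mask (1) + edge map (1) = 5.
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class RefinementGenerator(nn.Module):
    """Illustrative refinement stage: coarse composite + mask -> refined face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def e2f_forward(masked_img, mask, edge_map):
    """Run the coarse pass, composite with known pixels, then refine."""
    coarse = CoarseGenerator()(torch.cat([masked_img, mask, edge_map], dim=1))
    # Composite: keep the known periocular pixels (mask == 0),
    # use the coarse prediction elsewhere (mask == 1).
    composite = masked_img * (1 - mask) + coarse * mask
    refined = RefinementGenerator()(torch.cat([composite, mask], dim=1))
    return refined

img = torch.rand(1, 3, 64, 64)     # masked input face
mask = torch.ones(1, 1, 64, 64)    # 1 = region to inpaint
mask[:, :, 16:32, 8:56] = 0        # pretend the eye strip is known
edge = torch.rand(1, 1, 64, 64)    # stand-in for the edge predictor's output
out = e2f_forward(img, mask, edge)
```

In training, a discriminator would score `out` against real faces; here only the generator data flow is shown.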

Highlights

  • Image inpainting completes missing information or substitutes undesired regions of images with plausible, fine-grained content

  • A generative adversarial network (GAN)-based refinement module, consisting of a refinement generator (F) and a discriminator (D), is used to improve the coarse outputs

  • To add more details to the coarse output, we propose a GAN-based refinement module

Introduction

Image inpainting completes missing information or substitutes undesired regions of images with plausible, fine-grained content. It has a wide range of applications, including restoring damaged photos, editing images, and removing objects [1][2]. Conventional methods typically rely on low-level, hand-crafted features extracted from the corrupted input image and exploit priors or additional data. In recent years, learning-based strategies have been proposed to overcome these limitations by utilizing large volumes of training data [3][4]. Despite the great achievements of learning-based methods on this task, they are limited by at least three challenges: (1) the inpainted area should be
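The basic inpainting setup above can be written as a simple composite: given an image I, a binary mask M (1 marks missing pixels), and a network prediction, the output keeps the known pixels and takes predicted pixels only inside the hole. A minimal NumPy sketch (all names illustrative, not from the paper):

```python
import numpy as np

def composite(image, mask, prediction):
    """Keep known pixels from `image`; fill the hole from `prediction`.

    image:      H x W x 3 float array, the partially known input
    mask:       H x W x 1 binary array, 1 marks missing pixels
    prediction: H x W x 3 float array, the network's guess for the image
    """
    return image * (1 - mask) + prediction * mask

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
mask = np.zeros((4, 4, 1))
mask[1:3, 1:3] = 1              # a 2x2 hole in the middle
pred = rng.random((4, 4, 3))

out = composite(img, mask, pred)
# Known pixels are untouched; hole pixels come from the prediction.
assert np.allclose(out[0, 0], img[0, 0])
assert np.allclose(out[1, 1], pred[1, 1])
```

Learning-based methods differ in how `prediction` is produced, but nearly all of them apply this composite so that known pixels are never altered.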
