Abstract
Generative adversarial networks (GANs) have recently demonstrated high-quality reconstruction in face completion, yet conventional GAN models leave considerable room for improvement because they do not explicitly address texture detail. In this paper, we propose a Laplacian-pyramid-based generative framework for face completion. This framework produces more realistic results (1) by deriving precise content for missing face regions in a coarse-to-fine fashion and (2) by propagating high-frequency details from the surrounding area via a modified residual learning model. Specifically, for the missing regions, we design a Laplacian-pyramid-based convolutional network that predicts the missing content at multiple resolutions, exploiting multiscale features shared from lower levels and extracted from middle layers to refine each finer level. For the high-frequency details, we construct a new residual learning network that progressively eliminates color discrepancies between the missing and surrounding regions. Furthermore, a multiloss function is proposed to supervise the generative process: the entire model is trained with deep supervision using a joint reconstruction loss, which encourages the generated image to be as realistic as the original. Extensive experiments on benchmark datasets show that the proposed framework outperforms state-of-the-art methods in terms of both quantitative accuracy and visual quality.
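For intuition, the sketch below illustrates the Laplacian decomposition underlying the coarse-to-fine scheme described above. It uses OpenCV's classical pyrDown/pyrUp operators as an illustrative stand-in for the network's learned per-level predictions; the level count and API choice are assumptions for exposition, not the paper's implementation.

```python
# Minimal sketch of a Laplacian pyramid: decompose an image into band-pass
# layers plus a coarse residual, then collapse it coarse-to-fine. This mirrors
# how a pyramid-based generator refines content level by level (illustrative
# only; the paper's operators are learned, not fixed filters).
import cv2
import numpy as np

def build_laplacian_pyramid(img: np.ndarray, levels: int = 3):
    """Return `levels` high-frequency bands plus the coarsest approximation."""
    pyramid = []
    current = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)                               # low-pass + subsample
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)                              # high-frequency band
        current = down
    pyramid.append(current)                                       # coarsest level
    return pyramid

def reconstruct(pyramid):
    """Collapse the pyramid from coarse to fine, adding back each detail band."""
    img = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return img
```

In a completion setting, the coarsest level supplies the global structure of the missing region, and each finer band restores progressively higher-frequency detail, which is the coarse-to-fine behavior the abstract attributes to the framework.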
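The deep-supervised "multiloss" can likewise be pictured as one reconstruction term per pyramid level plus an adversarial term. The following PyTorch sketch is a hedged illustration under assumed interfaces; the level weights, the adversarial weight, and the function names are hypothetical, not the authors' released code.

```python
# Hedged sketch of a joint reconstruction loss with deep supervision:
# an L1 term at every pyramid level plus a generator GAN loss on the
# finest output. Weights and interfaces are illustrative assumptions.
import torch
import torch.nn.functional as F

def joint_reconstruction_loss(predictions, targets, disc_logits,
                              level_weights=(1.0, 0.8, 0.6), adv_weight=0.01):
    """predictions/targets: lists of per-level images, fine to coarse.
    disc_logits: discriminator output on the finest-level prediction."""
    loss = torch.zeros((), device=disc_logits.device)
    for w, pred, tgt in zip(level_weights, predictions, targets):
        loss = loss + w * F.l1_loss(pred, tgt)        # per-level supervision
    # Non-saturating generator loss: push discriminator logits toward "real".
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return loss + adv_weight * adv
```

Supervising every level, rather than only the final output, gives each stage of the generator a direct training signal, which is the usual motivation for deep supervision in coarse-to-fine models.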