Abstract
Face frontalization is the process of converting a face image under arbitrary pose into an image with frontal pose. Benefiting from the significant improvement of generative adversarial networks (GANs), generative models can use face frontalization to overcome the model degradation caused by head-pose variation in face recognition. Existing GAN-based models can generate a synthesized face image with the same identity as the input, but they struggle to capture the geometric structure or facial patterns, e.g. the face contour, through pixel-wise constraints alone. In this paper, we propose a Geometry Structure Preserving GAN (GSP-GAN) for multi-pose face frontalization and recognition. The generator of our model takes the form of a typical auto-encoder, where the encoder extracts an identity feature and the decoder synthesizes the corresponding frontal face image. In this process, a perception loss constrains the generator to synthesize a face image with the same identity as the input image. Meanwhile, we adopt real frontal face images as extra input data during training, where an L1 loss constructs a pixel-wise mapping from arbitrary-pose images to frontal images. More importantly, the discriminator of our model uses self-attention blocks to preserve the geometric structure of the face; it consists of a series of parallel sub-discriminators that carry global and local attention information. Compared with state-of-the-art models on the Multi-PIE, LFW and CFP datasets, the proposed GSP-GAN generates high-quality frontal images under arbitrary pose and achieves satisfactory recognition performance.
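As a rough illustration of how these terms could be combined, the sketch below assembles a generator objective from an adversarial term, the perception (identity) loss and the pixel-wise L1 loss. The function names, feature inputs and loss weights are assumptions for illustration only, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def generator_loss(fake_frontal, real_frontal, id_feat_fake, id_feat_real,
                   d_scores_fake, lambda_pix=10.0, lambda_id=1.0):
    """Hypothetical weighted sum of the three generator terms described in
    the abstract; the weights and the identity feature extractor are
    assumptions, not the authors' reported configuration."""
    # Pixel-wise L1 loss against the ground-truth frontal image.
    loss_pix = F.l1_loss(fake_frontal, real_frontal)
    # Perception (identity) loss: distance between identity features of
    # the synthesized face and the real frontal face.
    loss_id = F.l1_loss(id_feat_fake, id_feat_real)
    # Adversarial loss: the generator tries to make every parallel
    # sub-discriminator score the synthesized image as real.
    loss_adv = sum(F.binary_cross_entropy_with_logits(s, torch.ones_like(s))
                   for s in d_scores_fake)
    return loss_adv + lambda_pix * loss_pix + lambda_id * loss_id
```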
Highlights
Face recognition methods [1]–[5] have made considerable progress over the past decade
The contributions of this paper are listed as follows: 1) We propose a geometry structure preserving generative adversarial network (GSP-GAN) for multi-pose face frontalization, which is free of 3D face models and does not require prior knowledge of the pose type
2) We argue that the self-attention mechanism enables the generative model to learn the structural information of face images, improving the quality of the synthesized face images; a minimal sketch of such a block follows
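The sketch below shows one plausible form of the self-attention layer, assuming a SAGAN-style non-local block implemented in PyTorch; the channel-reduction factor and the learnable residual gate are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Non-local (SAGAN-style) self-attention block, sketched as one
    plausible instance of the attention layer used in GSP-GAN."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual gate

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                     # (b, c', hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # long-range dependencies over all positions
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection to the input feature map
```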
Summary
Face recognition methods [1]–[5] have made considerable progress over the past decade. 3) To the best of our knowledge, we are among the first to apply non-local-means based self-attention to face frontalization, and the proposed GSP-GAN is able to generate high-quality face images and further improve recognition performance. Zhang et al. proposed SAGAN [19], which introduced Non-local means [20] based self-attention into the conventional GAN model to establish long-range dependencies, beyond pixel-wise dependencies, in generation tasks. Inspired by this, a Geometry Structure Preserving GAN (GSP-GAN) for multi-pose face frontalization and recognition is proposed in this paper to capture the long-range dependencies of face geometry structure. In this work, we add the self-attention block to the discriminator and divide the input images into a global patch and local patches. This operation enables the GAN model to preserve geometric structure and synthesize more details of the human face. We discuss the details of the implementation process below.
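As one possible reading of the global/local patch split, the following sketch wires a full-image sub-discriminator in parallel with per-region sub-discriminators over fixed crops; the crop coordinates and the backbone factory are hypothetical placeholders, not the paper's configuration.

```python
import torch.nn as nn

class GlobalLocalDiscriminator(nn.Module):
    """Sketch of parallel sub-discriminators over the global face image
    and fixed local facial crops (e.g. eyes, nose, mouth regions)."""
    def __init__(self, make_backbone, local_boxes):
        super().__init__()
        self.local_boxes = local_boxes                          # (top, left, height, width) per region
        self.global_d = make_backbone()                         # scores the whole face
        self.local_ds = nn.ModuleList([make_backbone() for _ in local_boxes])

    def forward(self, img):
        scores = [self.global_d(img)]                           # global real/fake score
        for (t, l, h, w), d in zip(self.local_boxes, self.local_ds):
            scores.append(d(img[:, :, t:t + h, l:l + w]))       # local real/fake score per crop
        return scores
```

Each branch returns its own real/fake score, so the adversarial loss aggregates evidence from both the global view and the local facial regions, consistent with the parallel sub-discriminator design described above.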