Abstract

Face frontalization is a critical and difficult task in face pose reconstruction. Previous studies use simple posture information as guidance, such as pose coding and facial landmarks. To exploit the guidance available in profile faces, we propose using detailed features, which carry much richer information. In this paper, a Detailed Feature Guided Generative Adversarial Pose Reconstruction Network (DGPR) is proposed. First, the frontal pose coding and the detailed features of the profile face are fed into DGPR to generate the detailed features of the frontal face. Then, a second generator combines the frontal detailed features with the profile face to reconstruct the frontal face. In addition, we propose a conditional enhancement loss to strengthen the guiding role of the detailed features, and a smoothing loss to reduce edge sharpness in the generated faces. Experimental results show that our method generates photorealistic frontal faces and outperforms state-of-the-art methods on M2FPA and CAS-PEAL. Specifically, DGPR improves face recognition accuracy at pose angles of ±60°, ±75°, and ±90° by 2%, 1%, and 6%, respectively, over state-of-the-art methods on M2FPA, and raises the average rank-1 recognition rate on CAS-PEAL to 99.95%, an improvement of 0.05%. These results demonstrate the effectiveness of the detailed features and the corresponding modules.
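The abstract describes a two-stage generator pipeline: one generator predicts frontal detailed features from the profile detail map and a frontal pose coding, and a second generator reconstructs the frontal face from the profile image and those predicted details. The PyTorch sketch below only illustrates that data flow under stated assumptions; the module names (`DetailGenerator`, `FaceGenerator`), layer choices, channel sizes, and the spatial form of the pose coding are ours, not the authors' implementation.

```python
# Illustrative-only sketch of the two-stage DGPR data flow described in the
# abstract. Module names, channel sizes, and conditioning choices are
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class DetailGenerator(nn.Module):
    """Stage 1: profile detail map + frontal pose coding -> frontal detail map."""
    def __init__(self, in_channels=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, profile_detail, frontal_pose_coding):
        # Concatenate the conditioning pose coding along the channel axis.
        return self.net(torch.cat([profile_detail, frontal_pose_coding], dim=1))

class FaceGenerator(nn.Module):
    """Stage 2: profile face + predicted frontal detail map -> frontal face."""
    def __init__(self, in_channels=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, profile_face, frontal_detail):
        return self.net(torch.cat([profile_face, frontal_detail], dim=1))

# Toy forward pass with 128x128 inputs.
g1, g2 = DetailGenerator(), FaceGenerator()
profile_face = torch.randn(1, 3, 128, 128)    # RGB profile image
profile_detail = torch.randn(1, 1, 128, 128)  # detail map of the profile face
pose_coding = torch.zeros(1, 1, 128, 128)     # frontal pose coding (assumed to be a spatial map)
frontal_detail = g1(profile_detail, pose_coding)
frontal_face = g2(profile_face, frontal_detail)
print(frontal_face.shape)  # torch.Size([1, 3, 128, 128])
```

In practice, both stages would be trained adversarially with their own discriminators and the losses named in the abstract; this sketch covers only the generator data flow.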

Highlights

  • Profile faces are widespread in real-world face recognition applications

  • Benefiting from deep generative models such as Generative Adversarial Networks (GAN) [1] and Variational Autoencoders (VAE) [2], great progress has been made in face frontalization

  • We propose a dual-generator structure that combines profile faces and detailed features and achieves excellent results


Summary

INTRODUCTION

Profile faces are widespread in real-world face recognition applications. Benefiting from deep generative models, existing methods encode faces as high-level features in latent spaces, where multiple sub-modules process the features from different aspects to generate photorealistic frontal faces. This type of method focuses on high-level features and neglects detailed information such as edges, textures, and corners, which carry rich cues for improving the quality of frontalized faces. CAPG-GAN [7] utilizes five facial landmarks (eyes, nose, mouth), which cannot capture rich pose information over the rest of the facial surface. Moreover, these methods rely on common loss functions that do not account for detail smoothness, so their results suffer from rough and unrealistic details. Based on these observations, we propose to explore more detailed information from profile faces, aiming to provide more effective guidance for face frontalization.
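To make the notion of detailed information above concrete, the sketch below computes a Sobel gradient-magnitude map as an edge-level detail map and a simple total-variation style smoothness penalty. These are generic, illustrative choices only; the paper's own definitions of detailed features and of its smoothing loss may differ.

```python
# Illustrative extraction of an edge-level "detail map" and a total-variation
# style smoothness penalty. Generic choices for illustration, not necessarily
# the paper's definitions of detailed features or its smoothing loss.
import torch
import torch.nn.functional as F

def sobel_detail_map(img):
    """img: (N, 1, H, W) grayscale in [0, 1]; returns the gradient-magnitude map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel kernel for the vertical direction
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def tv_smoothness(img):
    """Mean absolute difference between neighbouring pixels (lower = smoother)."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

gray = torch.rand(1, 1, 128, 128)          # stand-in for a grayscale face image
detail = sobel_detail_map(gray)            # edge-level detail map
print(detail.shape, tv_smoothness(gray).item())
```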

RELATED WORKS
NETWORK STRUCTURE
LOSS FUNCTIONS
EXPERIMENTS
Findings
CONCLUSION