Abstract

This paper considers the problem of face frontalization in the wild, i.e., transforming a face image with a profile view into a frontal face. Face frontalization provides an effective solution to face recognition in uncontrolled scenes. However, existing methods either rely on deep learning as an end-to-end framework or combine it with explicit facial prior estimation tasks, such as 3D representation and optical flow estimation, where computation is highly redundant and facial identity is not well represented. In this paper, we focus on maximising the model's potential for identity learning and representation, and propose an accurate and lightweight face frontalization approach, named the identity-preserving model (IPM). IPM uses a well-designed encoder-decoder architecture that restores the input face to a frontal counterpart. The encoder extracts a representation from the input face and is trained with a contrastive loss that encourages representations to form compact clusters while preserving their relationships across the corpora. A cross-domain rectification module is then proposed to eliminate representation differences between the recognition and reconstruction domains, thereby improving the accuracy of the reconstructed face. Extensive experiments on benchmark datasets show that the proposed IPM not only outperforms the state of the art on public datasets but also copes with images from uncontrolled scenes.
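To make the contrastive objective concrete, below is a minimal sketch of a pairwise contrastive loss in the style of Hadsell et al.: embeddings of the same identity are pulled together, and embeddings of different identities are pushed apart up to a margin. The abstract does not specify the exact form of IPM's loss, so the function name, the Euclidean distance, and the `margin` parameter here are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def contrastive_loss(z1, z2, same_identity, margin=1.0):
    """Illustrative pairwise contrastive loss (not IPM's exact loss).

    z1, z2        : embedding vectors produced by the encoder
    same_identity : True if both faces belong to the same person
    margin        : minimum desired distance between different identities
    """
    d = np.linalg.norm(z1 - z2)  # Euclidean distance between embeddings
    if same_identity:
        # Positive pair: penalise any distance, pulling the cluster tight.
        return 0.5 * d ** 2
    # Negative pair: penalise only if closer than the margin.
    return 0.5 * max(0.0, margin - d) ** 2
```

Summed over sampled pairs, this kind of loss yields the compact per-identity clusters described above, while the margin keeps different identities separated rather than collapsing all representations to a single point.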
