Abstract

One of the most critical sources of variation in face recognition is facial expression, especially in the frequent case where only a single sample per person is available for enrollment. Methods that improve accuracy in the presence of such variations are still required for a reliable authentication system. In this paper, we address this problem with an analysis-by-synthesis scheme in which a number of synthetic face images with different expressions are produced. For this purpose, an animatable 3D model is generated for each user from 17 automatically located landmark points. The contribution of these additional images to recognition performance is evaluated with three different techniques (principal component analysis, linear discriminant analysis, and local binary patterns) on the Face Recognition Grand Challenge and Bosphorus 3D face databases. Significant improvements in face recognition accuracy are achieved for each database and algorithm.
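To illustrate the overall evaluation pipeline described above, the following is a minimal sketch (not the authors' implementation) of augmenting a single-sample-per-person gallery with synthetic expression images and matching probes with PCA ("eigenfaces") plus nearest-neighbour classification. The `synthesize_expressions` function is a hypothetical placeholder for the animatable 3D model rendering step; the PCA/1-NN stage stands in for one of the three evaluated techniques.

```python
# Minimal sketch, assuming grayscale images of equal size as NumPy arrays.
# synthesize_expressions is a hypothetical stand-in for the paper's
# landmark-driven, animatable 3D face model renderer.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier


def synthesize_expressions(neutral_image: np.ndarray, n_variants: int = 4) -> list:
    """Placeholder: in the paper, expression variants are rendered from a 3D
    model fitted to 17 landmarks. Here we return copies so the pipeline runs."""
    return [neutral_image.copy() for _ in range(n_variants)]


def build_gallery(enrollment_images, labels):
    """Expand a single-sample-per-person gallery with synthetic variants."""
    samples, sample_labels = [], []
    for img, label in zip(enrollment_images, labels):
        for variant in [img] + synthesize_expressions(img):
            samples.append(variant.ravel().astype(np.float64))
            sample_labels.append(label)
    return np.vstack(samples), np.array(sample_labels)


def train_and_match(gallery, gallery_labels, probes, n_components=50):
    """Project gallery and probe images into a PCA subspace and match by 1-NN."""
    pca = PCA(n_components=min(n_components, gallery.shape[0]))
    gallery_feats = pca.fit_transform(gallery)
    knn = KNeighborsClassifier(n_neighbors=1).fit(gallery_feats, gallery_labels)
    probe_feats = pca.transform(np.vstack([p.ravel() for p in probes]))
    return knn.predict(probe_feats)
```

In this setup, the synthetic images enter only the gallery side, so the probe set and the matching algorithm stay unchanged; any gain in rank-1 accuracy can then be attributed to the augmented enrollment data.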
