Abstract

Facial expression is one of the most critical sources of variation in face recognition, especially in the frequent case where only a single sample per person is available for enrollment. Methods that improve accuracy under such variations are still required for a reliable authentication system. In this paper, we address this problem with an analysis-by-synthesis scheme in which a number of synthetic face images with different expressions are produced. For this purpose, an animatable 3D model is generated for each user from 17 automatically located landmark points. The contribution of these additional images to recognition performance is evaluated with three techniques (principal component analysis, linear discriminant analysis, and local binary patterns) on the Face Recognition Grand Challenge and Bosphorus 3D face databases. Significant improvements in face recognition accuracy are achieved for each database and algorithm.
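The core idea, expanding a single-sample-per-person gallery with synthetic expression variants and then matching in a subspace, can be illustrated with a minimal sketch. This is not the paper's implementation: the random perturbations below merely stand in for the images rendered from the per-user animatable 3D model, and only the PCA (eigenface) matcher of the three evaluated techniques is shown.

```python
# Hedged sketch: augment a single-sample gallery with synthetic expression
# variants, then identify a probe by nearest neighbour in a PCA subspace.
# All data is random; in the paper the variants come from an animatable
# 3D face model fitted to 17 landmark points.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, dim, n_variants = 5, 64, 4

# One "neutral" enrollment vector per subject (single sample per person).
neutral = rng.normal(size=(n_subjects, dim))

# Synthetic expression variants: small perturbations stand in for the
# rendered expression images (assumption for this demo only).
gallery = np.concatenate(
    [neutral + 0.1 * rng.normal(size=(n_subjects, dim)) for _ in range(n_variants)]
)
labels = np.tile(np.arange(n_subjects), n_variants)

# PCA: centre the augmented gallery, keep the top principal components.
mean = gallery.mean(axis=0)
X = gallery - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:10].T                      # dim x 10 projection ("eigenfaces")
G = X @ W                          # projected gallery

def identify(probe):
    """Label of the nearest projected gallery neighbour."""
    p = (probe - mean) @ W
    return labels[np.argmin(np.linalg.norm(G - p, axis=1))]

# A probe of subject 2 with a stronger "expression" perturbation.
probe = neutral[2] + 0.2 * rng.normal(size=dim)
print(identify(probe))
```

The augmented gallery gives the subspace matcher several points per identity instead of one, which is the mechanism the paper credits for the accuracy gains; an LDA or LBP matcher would slot in at the projection/matching step.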
