Automated human face recognition has numerous applications in security and surveillance, database retrieval, and human-computer interaction. Over two decades of research have produced successful techniques for recognizing color/intensity 2D frontal facial images.1 However, performance declines severely with variations in pose, expression, and ambient illumination.2 Achieving robust and accurate automatic face recognition remains an important and open problem.

Recently, researchers have proposed the use of 3D models.3 In addition to providing explicit information about the shape of the face, these models can easily correct for pose by rigid rotation in 3D space, and they are scale and illumination invariant. However, successful 3D facial recognition techniques2 based on rigid surface matching incur an exceptionally high computational cost that makes them unsuitable for real-time operation. Existing 3D techniques are also unable to handle changes in facial expression. We have attempted to resolve some of these issues.

Features for face recognition are numerical quantities that vary considerably between individuals yet remain constant across different instances of the same individual. To date, 3D recognition algorithms based on local features have not embodied a sound understanding of discriminatory facial structural characteristics. As a starting point, we extensively investigated the existing literature on anthropometric facial proportions.5 We identified measurements reported to be highly diverse across age and gender cohorts and ethnic groups.
We employed these to design novel and effective algorithms.4 Specifically, we used 3D Euclidean and along-the-surface geodesic distances between 25 manually located facial fiducial points (i.e., loci of interest; see Figure 1), associated with the highly variable anthropometric measurements, as features for face recognition.4 We studied geodesic distances because a recent study6 suggests that changes in expression may be modeled as isometric deformations of the face, under which geodesic distances remain unchanged.

Figure 1. (a) The facial fiducial points associated with discriminatory facial anthropometric measurements on a color image. (b) These points on a facial 3D depth map image (reproduced from Gupta et al.4).
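To make the two feature types concrete, the following is a minimal Python sketch, not the authors' implementation: pairwise 3D Euclidean distances between fiducial points, and a geodesic distance approximated as the shortest path through the facial mesh's edge graph (Dijkstra's algorithm with edge weights equal to 3D edge lengths). The mesh representation (vertex list plus edge list) and all coordinates are illustrative assumptions.

```python
import math
import heapq
from itertools import combinations

def euclidean(p, q):
    """Straight-line 3D distance between two points."""
    return math.dist(p, q)

def pairwise_features(points):
    """All pairwise Euclidean distances between fiducial points.

    With 25 fiducial points this yields 25*24/2 = 300 candidate features.
    """
    return [euclidean(points[i], points[j])
            for i, j in combinations(range(len(points)), 2)]

def geodesic(vertices, edges, src, dst):
    """Approximate along-the-surface distance between vertices src and dst
    as the shortest path through the mesh edge graph (Dijkstra)."""
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        w = euclidean(vertices[i], vertices[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

# Hypothetical toy data: 4 "fiducial" vertices on a tiny mesh.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
mesh_edges = [(0, 1), (1, 2), (0, 3), (3, 2)]
feats = pairwise_features(verts)          # 6 Euclidean features
g = geodesic(verts, mesh_edges, 0, 2)     # path 0-1-2, length 2.0
```

Note that the edge-graph shortest path only upper-bounds the true surface geodesic; published work typically uses more accurate schemes such as fast marching on the triangulated surface.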