Abstract

Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for real-time geometric facial feature extraction from live video. The input image is viewed as a weighted graph, and segmentation of the pixels corresponding to the edges of the facial components (the mouth, eyes, brows, and nose) is performed by means of random walks on that graph. The graph has an 8-connected lattice structure, and the weight associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out noise while preserving the facial expression pixels. Seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented as linked lists in their original geometric form and grouped into parts corresponding to the facial components. To facilitate high-level vision, the geometric description of the facial component pixels is further decomposed into shape and registration information, where shape is defined as the geometric information that is invariant under registration transformations such as translation, rotation, and isotropic scaling. Statistical shape analysis using the Procrustes shape distance measure is carried out to capture global facial features, and a Bayesian approach incorporates high-level prior knowledge of face structure. Experimental results show that the proposed method extracts precise geometric facial features from live video in real time, and that the extraction is robust against illumination changes, scale variation, head rotation, and hand interference.
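
The abstract does not specify the exact edge-weighting function, so the sketch below only illustrates one common choice for random-walker-style segmentation on an 8-connected lattice: a Gaussian weighting of intensity differences, where similar neighboring pixels get a weight near one and strong intensity transitions (candidate facial edges) get a weight near zero. The function name, the beta parameter, and the weighting itself are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def lattice_edge_weights(image, beta=90.0, eps=1e-6):
    """Gaussian edge weights on an 8-connected pixel lattice (illustrative).

    Returns (edges, weights): edges is an (E, 2) array of flat pixel
    indices, weights is an (E,) array in (0, 1] where a high weight
    means a random walker is likely to cross that edge.
    """
    img = image.astype(np.float64)
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    # Four "forward" neighbor offsets; the remaining four directions of the
    # 8-connected lattice are the same edges viewed from the other endpoint.
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
    edges, weights = [], []
    for dy, dx in offsets:
        ny, nx = ys + dy, xs + dx
        valid = (ny >= 0) & (ny < h) & (nx >= 0) & (nx < w)
        src = idx[ys[valid], xs[valid]]
        dst = idx[ny[valid], nx[valid]]
        diff = img[ys[valid], xs[valid]] - img[ny[valid], nx[valid]]
        # Gaussian weighting: intensity edges get small crossing likelihood.
        edges.append(np.stack([src, dst], axis=1))
        weights.append(np.exp(-beta * diff ** 2) + eps)
    return np.concatenate(edges), np.concatenate(weights)

if __name__ == "__main__":
    # Toy example: a synthetic 4x4 "face patch" with one strong edge.
    patch = np.zeros((4, 4))
    patch[:, 2:] = 1.0
    e, wgt = lattice_edge_weights(patch)
    print(e.shape, wgt.min(), wgt.max())
```

In a full pipeline, these weights would feed the random-walk segmentation seeded by the color and motion detector described above; the toy example only shows that weights collapse toward zero across the synthetic intensity edge.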
