Abstract

Faces show both global and local motions: the global motion represents rigid head movement due to 3D translation and rotation, while the local motion represents non-rigid deformation due to speech or facial expressions. Although non-rigid face models can represent both types of facial motion, they are not sufficient on their own to track facial motions correctly. Non-rigid face models have a large number of model parameters to explain the various deformations of the face, and the high dimensionality of their parameter space makes them sensitive to the initial model parameters, prone to getting stuck in local minima, and difficult to recover (re-initialize) after a tracking failure when iterative gradient descent optimization is used. To alleviate these problems, we propose to use two types of face trackers that are suited to estimating the global and local motions, respectively. In the proposed algorithm, the global motion estimator is applied first, and the estimated global motion is used to compute proper initial model parameters for the local motion estimator so that it converges correctly. In this paper, we use the active appearance model (AAM) and the cylinder head model (CHM) as representative examples of non-rigid and rigid face models, respectively. Experimental results showed that face tracking combining AAMs and CHMs improved performance over AAMs alone, achieving a 170% higher tracking rate and a 115% wider pose coverage.
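
As a rough illustration of the two-stage idea described above, the sketch below shows how a rigid head-pose estimate could seed the non-rigid fit on each frame. The `CylinderHeadModel` and `ActiveAppearanceModel` objects and their methods (`estimate_pose`, `pose_to_params`, `fit`) are hypothetical placeholders for this illustration, not the paper's actual implementation or any specific library's API.

```python
def track_frame(frame, chm, aam, prev_aam_params):
    """Two-stage tracking sketch: rigid CHM pose first, then AAM fit.

    `chm` and `aam` are assumed tracker objects exposing the methods
    used below; `prev_aam_params` are the AAM parameters from the
    previous frame.
    """
    # Stage 1: global (rigid) motion -- estimate the 3D rotation and
    # translation of the head with the cylinder head model.
    rotation, translation = chm.estimate_pose(frame)

    # Stage 2: map the rigid pose onto the AAM's global pose parameters
    # so the local (non-rigid) fit starts near the correct solution
    # instead of the previous frame's possibly stale parameters.
    init_params = aam.pose_to_params(rotation, translation, prev_aam_params)

    # Fit the non-rigid AAM (shape and appearance) by iterative
    # gradient-descent optimization from the pose-corrected initialization.
    return aam.fit(frame, init_params)
```

The key design point is the hand-off between the two stages: the low-dimensional rigid estimate constrains where the high-dimensional non-rigid optimization begins, which is what reduces sensitivity to initialization and local minima.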
