Abstract

3D surface data, which yields significant improvements in face recognition under pose and illumination variations, is insufficient by itself in the presence of facial expressions. Methods proposed in this scope generally eliminate or avoid the adverse effect of facial expressions by analyzing only expression-invariant regions or by weighting facial regions according to their robustness. To overcome the facial expression problem in face recognition, the facial expressions of each subject can instead be obtained and learned. In this paper, we present an approach for obtaining dynamic models, on which facial expressions can be animated, from static ones by utilizing the TPS (Thin Plate Spline) method. Initially, 3D frontal and neutral face models of the subjects, together with some of the MPEG-4 specified feature points on those models, are assumed to be available. An animatable generic model is then warped using these models and points, and animatable models for the subjects are thereby obtained.
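The core warping step described above can be sketched with a thin plate spline fitted to sparse landmark correspondences. The following is a minimal illustration, not the paper's implementation: the landmark coordinates are random stand-ins for the MPEG-4 feature points, and SciPy's `RBFInterpolator` with a thin-plate-spline kernel is used in place of whatever TPS solver the authors employed.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Stand-ins for MPEG-4 feature points: positions on the animatable generic
# model and the corresponding positions on a subject's static neutral scan.
src_landmarks = rng.standard_normal((15, 3))
dst_landmarks = src_landmarks + 0.05 * rng.standard_normal((15, 3))

# Fit one TPS map R^3 -> R^3; RBFInterpolator accepts vector-valued targets,
# and with zero smoothing it interpolates the landmarks exactly.
tps = RBFInterpolator(src_landmarks, dst_landmarks, kernel="thin_plate_spline")

# Apply the map to every vertex of the generic mesh (random here) to obtain
# a subject-specific yet still animatable model.
generic_vertices = rng.standard_normal((1000, 3))
warped_vertices = tps(generic_vertices)
print(warped_vertices.shape)  # (1000, 3)
```

Because the warped model inherits the generic model's topology and animation structure, the subject-specific mesh remains animatable after the TPS deformation.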
