Abstract

Facial expression is one of the most critical sources of variation in the face recognition problem. This paper presents a system that overcomes this issue by simulating facial expressions on realistic, animatable face models compliant with the MPEG-4 specification. In our system, a 3D frontal face scan of each user, taken with a neutral expression and a closed mouth, is first acquired for one-time enrollment. These rigid face models are then converted into animatable models by warping a generic animatable model with the Thin Plate Spline method. The warping is driven by facial feature points, whose extraction is automated by exploiting both 2D color and 3D shape data. The resulting user models can be animated with a facial animation engine. This capability allows us to bring the whole database into the same "expression state" detected in a test image, improving recognition results by eliminating the disadvantage of expression variation.
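As a rough illustration of the warping step described above (not the authors' implementation), the sketch below fits a thin-plate-spline mapping between corresponding feature points and applies it to every vertex of the generic model. All function names and data here are hypothetical, and SciPy's `RBFInterpolator` is used as a stand-in TPS solver.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(generic_vertices, generic_landmarks, scan_landmarks):
    """Warp the generic animatable mesh so that its feature points
    coincide with those extracted from the user's 3D scan.

    generic_vertices  : (N, 3) array, all vertices of the generic model
    generic_landmarks : (K, 3) array, feature points on the generic model
    scan_landmarks    : (K, 3) array, corresponding points on the scan
    """
    # A thin-plate-spline interpolant maps each generic landmark exactly
    # onto its scan counterpart and extends that deformation smoothly
    # (minimum bending energy) to all remaining vertices.
    tps = RBFInterpolator(generic_landmarks, scan_landmarks,
                          kernel='thin_plate_spline')
    return tps(generic_vertices)

# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(0)
generic_vertices = rng.normal(size=(5000, 3))   # generic mesh vertices
generic_landmarks = rng.normal(size=(25, 3))    # e.g. eye/nose/mouth points
scan_landmarks = generic_landmarks + 0.05 * rng.normal(size=(25, 3))
warped = tps_warp(generic_vertices, generic_landmarks, scan_landmarks)
```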
