Abstract

Facial animation is useful in human-machine interaction, computer games, and teleconferencing. We propose a real-time performance-driven facial animation system for ordinary users. The system enables a user to animate an avatar by performing the desired facial motions in front of a video camera. First, an approach based on constrained local models is used to track the facial features of the performer in the video. To increase tracking accuracy, we propose an efficient method for building a user-specific local texture model. Next, a 3D blendshape face model is fitted to the tracked feature points. To improve the expressiveness of the synthesized animations, facial expression recognition results and pre-recorded animation priors are incorporated into the fitting procedure. Finally, facial animations are created by blendshape interpolation. Experiments show that the synthesized facial motions are realistic and closely resemble the performer's facial actions. Because it requires only an ordinary video camera, our system gives the user complete control over the generated facial animations.
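To make the fitting and interpolation steps concrete, below is a minimal sketch assuming the standard linear delta-blendshape formulation, B(w) = b0 + Σ_i w_i (b_i − b0); the paper's exact parameterization, constraints, and priors are not reproduced here, and all names (blend, fit_weights, etc.) are illustrative, not from the paper.

```python
# Sketch of blendshape interpolation and a plain least-squares weight fit
# to tracked feature points (the paper additionally incorporates expression
# recognition results and animation priors into this solve).
import numpy as np

def blend(neutral, deltas, weights):
    """Interpolate a face mesh: B(w) = b0 + sum_i w_i * (b_i - b0).

    neutral: (V, 3) neutral-pose vertices
    deltas:  (K, V, 3) per-blendshape vertex offsets from the neutral pose
    weights: (K,) blending weights, typically constrained to [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

def fit_weights(neutral, deltas, targets, idx):
    """Least-squares fit of blendshape weights to tracked feature points.

    targets: (F, 3) tracked feature-point positions
    idx:     (F,) mesh-vertex indices corresponding to the feature points
    Clipping to [0, 1] stands in for the constrained, prior-regularized
    solve used in the actual system.
    """
    A = deltas[:, idx, :].reshape(len(deltas), -1).T   # (3F, K) design matrix
    b = (targets - neutral[idx]).reshape(-1)           # (3F,) residual to explain
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy usage: 4 vertices, 2 blendshapes, 3 tracked feature points.
rng = np.random.default_rng(0)
neutral = rng.standard_normal((4, 3))
deltas = rng.standard_normal((2, 4, 3))
idx = np.array([0, 2, 3])
targets = blend(neutral, deltas, np.array([0.3, 0.7]))[idx]
print(fit_weights(neutral, deltas, targets, idx))  # recovers ~[0.3, 0.7]
```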

