Abstract

A real-time 3-D facial animation system produces animation for both the external appearance and the internal articulators. For appearance, an anatomical model comprising skeleton, muscle, and skin is built from anatomical characteristics, and a data-driven model is obtained by learning the mapping between texture and depth. The two models are then combined to produce animations of varying strength: the anatomical model controls the animation strength directly, while the data-driven model captures the nuances of facial motion. For the internal articulators, tongue tissue arrangements are obtained from medical data. A nonlinear, quasi-incompressible, isotropic, hyperelastic biomechanical model then describes the tongue tissues, and an anisotropic biomechanical model reflects the active and passive mechanical behavior of the tongue muscle fibers. The tongue animation is simulated with the finite-element method for realism, while collisions between the tongue and the other articulators are handled with a mass-spring model for efficiency. Experiments show that the system achieves high perceptual evaluation scores, and objective evaluations and user studies demonstrate quantitative improvements over the outputs of other systems.
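To illustrate the efficiency-oriented collision side of the pipeline, the following is a minimal mass-spring sketch, not the paper's implementation: two nodes in one dimension joined by a damped spring, integrated with semi-implicit Euler. All parameters (stiffness `k`, damping `c`, node mass `m`, time step `dt`, rest length `rest`) are illustrative assumptions.

```python
def simulate(steps=2000, dt=0.001, k=50.0, c=1.0, m=0.1, rest=1.0):
    # Two nodes on a line (node 1 to the right of node 0) and their velocities.
    x = [0.0, 1.5]          # spring starts stretched beyond its rest length
    v = [0.0, 0.0]
    for _ in range(steps):
        d = x[1] - x[0]     # current spring length (stays positive here)
        # Hooke's law plus a damping term along the spring axis;
        # f is the force applied to node 0 in the +x direction.
        f = k * (d - rest) + c * (v[1] - v[0])
        a = f / m
        v[0] += a * dt      # equal and opposite forces on the two nodes
        v[1] -= a * dt
        x[0] += v[0] * dt   # positions updated with the new velocities
        x[1] += v[1] * dt
    return abs(x[1] - x[0])  # final spring length

print(round(simulate(), 3))  # → 1.0 (the damped spring settles at rest length)
```

A full articulator collision model would place many such spring-connected nodes on the contact surfaces; the appeal of the mass-spring formulation here is that each step is a cheap explicit update, in contrast to the finite-element solve used for the tongue tissue itself.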
