Abstract

Facial modeling and animation are important research topics in computer graphics. Although much research has been devoted to these areas over the last 20 years, they remain challenging. The impact of previous and ongoing research can be seen in many applications, such as games, Web-based 3D animation, and 3D animated films. Two main directions have been investigated: precomputed animation with highly realistic results for animated films, and real-time animation for interactive applications. Correspondingly, animation techniques range from key-frame animation, in which animators specify each frame, to algorithmic, parameterized mesh deformation. Many of the proposed deformation models use a parameterization scheme that helps control the animation. Computer graphics has evolved to a relatively mature state. In parallel with the evolution of 3D graphics technologies, user and application requirements have also grown dramatically, from simple virtual worlds to highly complex, interactive, and detailed virtual environments. Additionally, the targeted display platforms have broadened widely, from dedicated graphics workstations or clusters of machines to standard desktop PCs, laptops, and mobile devices such as personal digital assistants (PDAs) or even mobile phones. Facial animation illustrates this closely coupled evolution of graphics techniques, applications, and users' requirements. Indeed, despite extensive work on modeling, animation, and rendering techniques, it is still a major challenge to animate a highly realistic face with simulated hair and cloth, or to display hundreds of thousands of real-time animated humans on a standard computer, and it is still not possible to render animated characters on most mobile devices. The focus of this chapter is to present dynamically adaptive real-time facial animation techniques.
We discuss methods that automatically and dynamically control the processing and memory loads, together with the visual realism of rendered motions, for real-time facial animation. Such approaches should free additional resources for hair or cloth animation, for instance; they should also achieve real-time facial animation performance across platforms, including lightweight devices, and enable virtual environments to be enriched with ever more facially animated humans in a single scene.
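The core idea of dynamically trading visual realism against processing load can be sketched as a simple feedback loop: each frame's measured cost is compared against a time budget, and the animation detail level is raised or lowered accordingly. The following sketch is purely illustrative and not the chapter's actual method; the class name, thresholds, and the notion of a discrete "detail level" are assumptions for the example.

```python
class AdaptiveDetailController:
    """Illustrative sketch (not the chapter's method) of a dynamic
    level-of-detail controller for real-time facial animation.

    The controller keeps a discrete detail level (e.g. the number of
    deformed mesh regions, or the resolution of the face mesh) and
    adjusts it so the measured frame time stays near a target budget.
    """

    def __init__(self, target_frame_ms=16.7, min_level=1, max_level=10):
        self.target = target_frame_ms   # e.g. 16.7 ms for 60 fps
        self.min_level = min_level
        self.max_level = max_level
        self.level = max_level          # start at full detail

    def update(self, measured_frame_ms):
        # Over budget: shed detail to recover real-time performance.
        if measured_frame_ms > 1.1 * self.target and self.level > self.min_level:
            self.level -= 1
        # Comfortably under budget: spend the slack on more realism.
        elif measured_frame_ms < 0.8 * self.target and self.level < self.max_level:
            self.level += 1
        # Within the deadband: keep the current level to avoid oscillation.
        return self.level
```

A renderer would call `update()` once per frame with the last measured frame time and use the returned level to select, for example, how many animated faces to deform in full detail. The 10% / 20% hysteresis band around the target is an arbitrary choice made here to prevent the level from flickering between adjacent values.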
