Abstract

We consider the problem of generating 3D facial animation for characters. An efficient procedure uses motion capture (MoCap) data obtained by tracking facial markers on an actor or actress. In artistic animation, however, the character's facial expression sometimes differs from the actor's: for example, from MoCap data of an actor speaking, a user may want to create an animation of the character speaking with a smirk. In this paper, we propose a new, easy-to-use system for creating character facial animation from MoCap data. Our system is based on interpolation: once the character's facial expressions for the starting and ending frames are given, the intermediate frames are generated automatically from the MoCap data. The interpolation procedure consists of three stages. First, the time axis of the animation is divided into several intervals by the fused lasso signal approximator. Second, kernel k-means clustering is applied to obtain control points. Finally, the interpolation is carried out using these control points. By changing the control points, the user can easily create a wide variety of 3D character facial expressions.
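To make the three-stage procedure concrete, the following is a minimal NumPy sketch of the pipeline, not the paper's implementation. All function names and parameters (`lam`, `eps`, the RBF kernel, the blending weights) are illustrative assumptions; in particular, the fused lasso signal approximator is approximated here by crude subgradient descent, where a real system would use an exact solver.

```python
import numpy as np

def fused_lasso_1d(y, lam=0.2, step=0.05, n_iter=400):
    """Crude fused lasso signal approximator: subgradient descent on
    0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|.
    (A stand-in for a proper FLSA solver.)"""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.sign(np.diff(x))          # subgradient of the fusion penalty
        g = x - y
        g[1:] += lam * d
        g[:-1] -= lam * d
        x -= step * g
    return x

def split_intervals(x, eps=0.3):
    """Stage 1: cut the time axis where the piecewise-constant fit jumps."""
    jumps = np.flatnonzero(np.abs(np.diff(x)) > eps) + 1
    return np.split(np.arange(len(x)), jumps)

def kernel_kmeans(K, k, n_iter=20, seed=0):
    """Stage 2 (sketch): kernel k-means on a precomputed kernel matrix K.
    Feature-space distances are computed via the kernel trick."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)   # empty clusters stay at infinity
        for c in range(k):
            idx = labels == c
            m = idx.sum()
            if m == 0:
                continue
            dist[:, c] = (diag
                          - 2.0 * K[:, idx].sum(axis=1) / m
                          + K[np.ix_(idx, idx)].sum() / m**2)
        labels = dist.argmin(axis=1)
    return labels

def interpolate(start, end, weights):
    """Stage 3 (sketch): blend the user-given start/end expressions per
    frame, with weights in [0, 1] derived from the MoCap signal
    (hypothetical weighting scheme)."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    return (1.0 - w) * start + w * end
```

A typical use under these assumptions: run `fused_lasso_1d` on a marker trajectory and `split_intervals` on the fit to segment the animation, cluster the frames of each interval with `kernel_kmeans` to pick representative control points, then call `interpolate` to blend between the starting and ending expressions the user supplied.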

