Abstract

This work presents a methodology that aims to improve and automate the process of generating facial animation for interactive applications. We propose an adaptive, semi-automatic methodology that transfers facial expressions from one face mesh to another. The model has three main stages: rigging, expression transfer, and animation, where the output meshes can be used as key poses for blendshape-based animation. The input of the model is a face mesh in a neutral pose and a set of face data that can be provided from different sources, such as artist-crafted meshes and motion capture data. The model generates a set of blendshapes corresponding to the input set, with minimal user intervention. We opted for a simple rig structure in order to provide a straightforward correspondence with either systems based on sparse facial feature points or systems that supply dense geometric data, such as RGBD-based capture. The rig structure can be refined on the fly to handle different input geometric data as needed. The main contribution of this work is an adaptive methodology that creates facial animations with little user intervention and is capable of transferring expression details according to the need and/or amount of input data.
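For context, the blendshape-based animation the abstract refers to conventionally combines a neutral mesh with weighted per-pose vertex offsets (the delta blendshape formulation). The sketch below is a generic illustration of that standard technique, not the authors' implementation; the function name `blend_shapes` and the toy meshes are hypothetical.

```python
import numpy as np

def blend_shapes(neutral, key_poses, weights):
    """Blend a neutral face mesh with a set of blendshape key poses.

    neutral   : (V, 3) array of vertex positions for the neutral pose.
    key_poses : (K, V, 3) array, one key pose per generated expression.
    weights   : (K,) array of blend weights, typically in [0, 1].
    Returns the animated mesh as a (V, 3) array.
    """
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(key_poses, dtype=float) - neutral   # per-pose vertex offsets
    weights = np.asarray(weights, dtype=float)
    # Weighted sum of offsets added back onto the neutral mesh.
    return neutral + np.tensordot(weights, deltas, axes=1)

if __name__ == "__main__":
    # Toy 4-vertex mesh: blend halfway toward a "smile" pose and slightly toward a "blink" pose.
    neutral = np.zeros((4, 3))
    smile = neutral.copy(); smile[0] = [0.0, 0.10, 0.0]
    blink = neutral.copy(); blink[1] = [0.0, -0.05, 0.0]
    frame = blend_shapes(neutral, np.stack([smile, blink]), [0.5, 0.2])
    print(frame)
```

In this formulation, each key pose generated by the expression-transfer stage contributes only its offset from the neutral mesh, so expressions can be mixed additively frame by frame during animation.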
