Abstract
This paper presents an approach for reproducing optimal 3-D facial expressions based on blendshape regression. It aims to improve the fidelity of facial expressions while maintaining the efficiency of the blendshape method, which is necessary for applications such as human-machine interaction and avatars. The method optimizes a given facial expression using action units (AUs), based on the Facial Action Coding System, recorded from human faces. To help capture facial movements for the target face, an intermediate model space is generated in which the target and source AUs share the same mesh topology and vertex count. The optimization is conducted interactively in the intermediate model space by adjusting a regulating parameter. The optimized expression model is then transferred back to the target facial model to produce the final facial expression. We demonstrate that, given a sketched facial expression with rough vertex positions indicating the intended expression, the proposed method approximates the sketch by automatically selecting blendshapes with corresponding weights. The sketched expression model is thus approximated through AUs representing true muscle movements, which improves the fidelity of facial expressions.
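The core fitting step the abstract describes (regressing blendshape weights so that AU-based blendshapes approximate a sketched target expression) can be sketched as a regularized least-squares problem. The sketch below is illustrative only, not the paper's implementation: all shapes and names (`neutral`, `deltas`, the `lam` regularization term standing in for the regulating parameter) are assumptions, and the target mesh is synthesized from known weights purely so the fit can be checked.

```python
import numpy as np

# Hypothetical sizes: V vertices, K blendshapes (AUs).
rng = np.random.default_rng(0)
V, K = 300, 8

neutral = rng.normal(size=(V, 3))    # neutral face mesh, one row per vertex
deltas = rng.normal(size=(K, V, 3))  # per-AU vertex displacement fields

# Synthesize a "sketched" target expression from known weights so the
# recovered weights can be verified against ground truth.
true_w = np.array([0.7, 0.0, 0.3, 0.0, 0.9, 0.0, 0.0, 0.2])
target = neutral + np.tensordot(true_w, deltas, axes=1)

# Stack each blendshape's displacement field into a column of B, then fit
# weights by ridge regression: minimize ||B w - d||^2 + lam * ||w||^2,
# where lam plays the role of a regulating parameter trading fidelity
# against weight magnitude.
B = deltas.reshape(K, -1).T        # (3V, K) regression matrix
d = (target - neutral).ravel()     # (3V,) displacement of the sketch
lam = 1e-3
w = np.linalg.solve(B.T @ B + lam * np.eye(K), B.T @ d)
w = np.clip(w, 0.0, 1.0)           # keep blendshape weights in a valid range

print(np.round(w, 2))
```

Applying the recovered weights to the blendshape basis reconstructs the expression from AU displacements, which is the sense in which the sketch is "approximated through AUs" rather than reproduced vertex by vertex.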