Abstract
Most current facial animation approaches focus on the accuracy or efficiency of their algorithms, or on how to optimally utilize pre-collected facial motion data. However, human perception, the ultimate measure of the visual fidelity of synthetic facial animation, has not been effectively exploited in these approaches. In this paper, we present a novel perceptually guided computational framework for expressive facial animation that bridges objective facial motion patterns with subjective perceptual outcomes. First, we construct a facial perceptual metric (FacePEM) using a hybrid of region-based facial motion analysis and statistical learning techniques. The constructed FacePEM model automatically measures the emotional expressiveness of a facial motion sequence. We then show how the FacePEM model can be effectively incorporated into various facial animation algorithms, choosing data-driven expressive speech animation generation and expressive facial motion editing as two concrete application examples. Through a comparative user study, we show that, compared with traditional facial animation algorithms, the proposed perceptually guided algorithms significantly increase the emotional expressiveness and perceptual believability of synthesized facial animations.
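To make the FacePEM idea concrete, the following is a minimal Python sketch of a perceptual metric in the spirit described above: region-based motion statistics are extracted from a facial motion sequence and fed to a statistical model trained on subjective expressiveness ratings. The region partition, the specific features, and the SVR regressor are all illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of a FacePEM-style metric: region-based motion features
# plus a learned mapping to human expressiveness ratings. All names,
# landmark indices, and model choices here are assumptions for illustration.
import numpy as np
from sklearn.svm import SVR

# Hypothetical partition of facial landmarks into regions (indices assumed).
REGIONS = {
    "brow":  range(0, 10),
    "eye":   range(10, 22),
    "mouth": range(22, 40),
}

def region_motion_features(seq: np.ndarray) -> np.ndarray:
    """seq: (frames, landmarks, 3) array of 3D marker positions.
    Returns per-region motion statistics (mean/max/std of frame-to-frame
    displacement), a simple stand-in for region-based motion analysis."""
    disp = np.linalg.norm(np.diff(seq, axis=0), axis=2)  # (frames-1, landmarks)
    feats = []
    for idx in REGIONS.values():
        r = disp[:, list(idx)]
        feats.extend([r.mean(), r.max(), r.std()])
    return np.asarray(feats)

class FacePEM:
    """Learns a mapping from motion features to a perceived-expressiveness score."""
    def __init__(self):
        self.model = SVR(kernel="rbf")

    def fit(self, sequences, ratings):
        # ratings: subjective expressiveness scores collected in a user study
        X = np.stack([region_motion_features(s) for s in sequences])
        self.model.fit(X, ratings)
        return self

    def score(self, seq: np.ndarray) -> float:
        # Higher score = more emotionally expressive, per the learned model.
        return float(self.model.predict(region_motion_features(seq)[None])[0])
```

Once trained, such a metric could guide an animation algorithm by scoring candidate motion sequences and preferring the more expressive ones, which is how a perceptual model of this kind would plug into, e.g., data-driven speech animation generation or motion editing.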