Abstract

This paper proposes a model for visual laughter generation by means of speaker-dependent training of Hidden Markov Models (HMMs). It is composed of the following parts: 1) facial motions and 2) head motions are modeled with separate HMMs, while 3) eye blinks are added as a post-processing step on the generated eyelid trajectories. The models are trained on a database of facial expressions recorded from one male subject watching humorous videos. A commercially available marker-based motion capture system was used to record the visual data. A preliminary study showed that modeling head motion with the same transcriptions as for facial deformation is not the best choice, as the resulting head motion appears too rigid. Finally, the generated facial laughter trajectories are used to animate a 3D face model, and the corresponding animation is rendered in a video. An online MOS (Mean Opinion Score) perception test is conducted to assess the improvement over the previous method and to compare against the perception of ground-truth trajectories. Results show that the new approach significantly outperforms the previous one.
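To make the general pipeline concrete, the sketch below illustrates the idea of training a Gaussian HMM on motion-capture feature trajectories and sampling a synthetic trajectory from it. It is a minimal illustration using the third-party hmmlearn package with hypothetical data shapes and model sizes; the paper's actual speaker-dependent HMM synthesis system, its feature set, and its transcriptions are not reproduced here.

```python
# Minimal sketch (assumption, not the paper's implementation):
# train a Gaussian HMM on facial motion-capture trajectories and
# sample a new trajectory from it, using the hmmlearn package.
import numpy as np
from hmmlearn import hmm

# Hypothetical data: a list of per-laugh feature trajectories,
# each of shape (n_frames, n_features), e.g. stacked marker coordinates.
trajectories = [np.random.randn(120, 30), np.random.randn(90, 30)]

# hmmlearn expects all sequences concatenated plus their lengths.
X = np.concatenate(trajectories)
lengths = [len(t) for t in trajectories]

# One Gaussian HMM per motion stream; here a single model standing in
# for the facial-deformation stream (head motion would use a separate HMM).
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
model.fit(X, lengths)

# Generate a 100-frame synthetic trajectory; eye blinks would then be
# added as a post-processing step on the eyelid channels.
generated, states = model.sample(100)
print(generated.shape)  # (100, 30)
```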

