Abstract

This paper presents a Chinese interactive virtual character based on multi-modal mapping and rules, which receives information from the input modules and generates audio-visual speech, facial expressions, and body animations. The audio and visual speech are synthesized from the input text by multi-modal mapping, while facial expressions and body movements are driven by emotion states through rules. All of the original animations are captured with a motion capture system and retargeted onto a character model created in 3D modeling software. We use an open-source skeletal animation engine to build the scene in which the virtual character talks with users as a human would. The overall expression of the virtual character is perceived as natural and realistic.

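To illustrate the control flow described above, the following is a minimal, hypothetical sketch in Python: input text and an emotion state drive (a) synthesized audio-visual speech and (b) rule-based selection of pre-captured facial-expression and body-animation clips. All names here (EMOTION_RULES, synthesize_speech, AnimationClip, and the clip labels) are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class AnimationClip:
    """A motion-captured clip retargeted onto the character model (hypothetical)."""
    name: str
    duration_s: float


# Rule table (assumed): each emotion state maps to a facial-expression clip
# and a body-animation clip captured in advance.
EMOTION_RULES = {
    "happy":   (AnimationClip("face_smile", 1.2), AnimationClip("body_open_gesture", 2.0)),
    "sad":     (AnimationClip("face_frown", 1.5), AnimationClip("body_slump", 2.5)),
    "neutral": (AnimationClip("face_idle", 1.0),  AnimationClip("body_idle", 3.0)),
}


def synthesize_speech(text: str):
    """Stand-in for the multi-modal mapping from input text to
    synchronized audio and visual (lip-shape) speech parameters."""
    audio = f"<audio for: {text}>"               # placeholder for a TTS waveform
    visemes = [f"viseme({ch})" for ch in text]   # placeholder for lip-sync targets
    return audio, visemes


def drive_character(text: str, emotion: str):
    """Combine speech synthesis with rule-based expression/body selection."""
    audio, visemes = synthesize_speech(text)
    face_clip, body_clip = EMOTION_RULES.get(emotion, EMOTION_RULES["neutral"])
    return {
        "audio": audio,
        "visemes": visemes,
        "face": face_clip.name,
        "body": body_clip.name,
    }


if __name__ == "__main__":
    # Example: a Chinese greeting spoken with a "happy" emotion state.
    print(drive_character("你好，很高兴见到你", "happy"))
```

In this sketch the emotion state only selects among pre-captured clips, which mirrors the rule-based design in the abstract; the actual multi-modal text-to-speech mapping and the retargeting onto the skeletal model are outside its scope.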