Abstract

In this study, we create a 3D interactive virtual character based on multi-modal emotion recognition and rule-based emotion synthesis techniques. The agent estimates the user's emotional state by combining information from audio and facial expressions using CART and boosting. For the agent's output module, the voice is generated by a TTS (Text-to-Speech) system from freely given text. The synchronous visual behavior of the agent, including facial expression, head motion, gesture, and body animation, is generated by multi-modal mapping from a motion capture database. A high-level behavior markup language (hBML), which contains five keywords, is used to drive the animation of the virtual agent for emotional expression. Experiments show that the virtual character is perceived as natural and realistic in multi-modal interaction environments.
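
The abstract names CART with boosting for fusing audio and facial cues. As a rough illustration only, the sketch below shows feature-level fusion with boosted CART trees using scikit-learn; the feature dimensions, emotion label set, and synthetic data are assumptions for demonstration, not the paper's actual setup.

```python
# Minimal sketch (not the authors' implementation) of multi-modal
# emotion estimation with CART base learners inside a boosting
# ensemble. All features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-utterance features for two modalities.
audio_feats = rng.normal(size=(500, 12))  # e.g., pitch/energy statistics
face_feats = rng.normal(size=(500, 20))   # e.g., facial action-unit intensities
labels = rng.integers(0, 4, size=500)     # e.g., neutral/happy/sad/angry

# Feature-level fusion: concatenate the audio and facial descriptors.
X = np.hstack([audio_feats, face_feats])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# CART trees (scikit-learn's DecisionTreeClassifier uses an optimized
# CART algorithm) serve as weak learners for AdaBoost.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=100,
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```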
