Abstract

Prior research includes the development of InterActor, a speech-driven embodied entrainment computer-generated character that automatically generates communicative motions and actions, such as nods, for entrained interaction from voice rhythm based on speech input alone. Because the conventional InterActor generates only positive actions regardless of verbal content, it can aggravate a speaker's negative emotions by performing positive gestures in response to negative utterances. In this paper, we develop an advanced speech-driven embodied entrainment character system that responds to and improves the speaker's emotional state using speech recognition. The system converts the speaker's words to text via speech recognition and estimates the speaker's emotions from character strings in the converted text, using a database that assigns a quantitative score to each word. The system then automatically generates negative/positive motions based on the semantic orientations of the words in an utterance, in addition to entrained motions. Furthermore, we demonstrate the effectiveness of the system through three experiments: two role-play experiments with a single user involving positive/negative scenarios, and a communication experiment between two remote users of the developed system.
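The emotion-estimation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the lexicon contents, score ranges, motion names, and threshold are all hypothetical, standing in for the word-quantification database and motion generator the abstract describes.

```python
# Hypothetical word-polarity database: each word maps to a
# semantic-orientation score in [-1, 1] (stand-in for the paper's database).
SEMANTIC_ORIENTATION = {
    "happy": 0.9, "great": 0.8, "fine": 0.4,
    "sad": -0.8, "tired": -0.5, "terrible": -0.9,
}

def estimate_orientation(words):
    """Average the scores of known words; 0.0 if no word is in the database."""
    scores = [SEMANTIC_ORIENTATION[w] for w in words if w in SEMANTIC_ORIENTATION]
    return sum(scores) / len(scores) if scores else 0.0

def select_motion(words, threshold=0.2):
    """Choose a positive, negative, or default entrained motion
    from the estimated semantic orientation of the utterance."""
    score = estimate_orientation(words)
    if score > threshold:
        return "positive_motion"   # e.g. nod accompanied by a positive gesture
    if score < -threshold:
        return "negative_motion"   # e.g. subdued nod matching negative content
    return "entrained_motion"      # default rhythm-driven entrained motion

# Recognized utterances (already converted to text by speech recognition):
print(select_motion("i am so happy today".split()))   # positive_motion
print(select_motion("i feel sad and tired".split()))  # negative_motion
```

In this sketch, the entrained (rhythm-driven) motion remains the default, and the semantic-orientation score only overrides the gesture's emotional coloring when it is clearly positive or negative.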
