Abstract

Interpreting, modeling and representing emotions is a key feature of new-generation games. This paper describes the first version of the Emotional Engine we have developed as a component of more complex behavior simulators. The purpose of this module is to manage the state and behavior of the characters present in a scene while they interact with a human user. We use pre-existing speech recognition libraries, such as the Windows™ Speech API, together with Kinect™ devices to let real humans communicate with the artificial characters participating in a virtual scene. The Emotional Engine operates on numeric variables extracted from these devices and computed through a natural language interpretation process. It then produces numerical results that drive the characters' behavior, modify both their verbal and body language, and influence the overall evolution of the scene taking place inside the simulator. This paper presents the system architecture and discusses some of its key components, such as the Language Interpretation and Body Language Interpreter modules.
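
To make the described data flow concrete, the following is a minimal sketch (not the paper's implementation) of how numeric inputs from language and body-language interpretation could be blended into a character's emotional state and mapped to numeric outputs for a behavior layer. All names (EmotionalEngine, valence, arousal, smile_weight, etc.) are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class EmotionalState:
    # Hypothetical two-dimensional affect representation (not specified in the paper).
    valence: float = 0.0   # negative..positive, clamped to [-1, 1]
    arousal: float = 0.0   # calm..excited, clamped to [-1, 1]


def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))


class EmotionalEngine:
    """Illustrative sketch: numeric inputs from speech and gesture interpretation
    update the character's emotional state, which is then mapped to numeric
    outputs that could drive verbal and body-language behavior."""

    def __init__(self, decay: float = 0.9):
        self.state = EmotionalState()
        self.decay = decay  # how strongly previous emotions persist between updates

    def update(self, speech_sentiment: float, gesture_intensity: float) -> dict:
        # Blend new evidence with the decayed previous state.
        self.state.valence = clamp(
            self.decay * self.state.valence + (1 - self.decay) * speech_sentiment)
        self.state.arousal = clamp(
            self.decay * self.state.arousal + (1 - self.decay) * gesture_intensity)
        # Numeric outputs that an animation/dialogue layer could consume.
        return {
            "smile_weight": max(0.0, self.state.valence),
            "speech_rate": 1.0 + 0.5 * self.state.arousal,
            "gesture_amplitude": 0.5 + 0.5 * abs(self.state.arousal),
        }


# Example: a positive utterance with moderate gesturing nudges the character
# toward a happier, slightly more animated presentation.
engine = EmotionalEngine()
print(engine.update(speech_sentiment=0.8, gesture_intensity=0.4))
```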
