Abstract

Emotions play a critical role in many processes, social interaction among them. Evoking and recognizing emotions is therefore a challenging task with widespread implications, notably for mental health assessment systems. To date, however, emotion elicitation methods have not exploited simulated open social conversations. This study introduces a comprehensive Virtual Human (VH), equipped with a realistic avatar and conversational abilities based on a Large Language Model. The architecture integrates psychological constructs (personality, mood, and attitudes) with emotional facial expressions, lip synchronization, and voice synthesis. All of these features are embedded in a modular, cognitively inspired framework designed for voice-based, semi-guided emotional conversations in real time. For validation, 64 participants interacted with six distinct VHs, each designed to provoke a different basic emotion. The system took an average of 4.44 s to generate the VH's response. Participants rated the naturalness and realism of the conversation at averages of 4.61 and 4.44 out of 7, respectively. The VHs successfully generated the intended emotional valence in the users, while arousal was not evoked, though it could be recognized in the VHs. These findings underscore the feasibility of employing VHs within affective computing to elicit emotions in socially and ecologically valid contexts, with significant potential for application in sectors such as health, education, and marketing, among others.
