Abstract

Artificial agents are increasingly part of human society, destined for schools, hospitals, and homes, where they will perform a variety of tasks. To engage their human users, artificial agents must be equipped with essential social skills such as facial expression communication. However, many artificial agents remain limited in this ability because they are typically equipped with a narrow set of prototypical, Western-centric facial expressions of emotion that lack naturalistic dynamics. Our aim is to address this challenge by equipping artificial agents with a broader repertoire of socially relevant and culturally sensitive facial expressions (e.g., complex emotions, conversational messages, social and personality traits). To this end, we use new data-driven, psychology-based methodologies that can reverse-engineer dynamic facial expressions from human cultural perception. We show that our human-user-centered approach can reverse-engineer many different, highly recognizable, and human-like dynamic facial expressions that typically outperform the facial expressions of existing artificial agents. By objectively analyzing these dynamic facial expression models, we can also identify specific latent syntactical signalling structures that can inform the design of generative models for culture-specific and universal social face signalling. Together, our results demonstrate the utility of an interdisciplinary approach that applies data-driven, psychology-based methods to inform the social signalling capabilities of artificial agents. We anticipate that these methods will broaden the usability and global marketability of artificial agents and highlight the key role that psychology must continue to play in their design.
