Abstract

Correctly interpreting an interlocutor's emotional expression is paramount to a successful interaction. But what happens when one of the interlocutors is a machine? Facilitating human-machine communication and cooperation is of growing importance as smartphones, autonomous cars, and social robots increasingly pervade human social spaces. Previous research has shown that emotionally expressive virtual characters generally elicit higher cooperation and trust than 'neutral' ones. Since emotional expressions are multi-modal, and given that virtual characters can be designed to our liking in all their components, would a mismatch between the emotion expressed in the face and that expressed in the voice influence people's cooperation with a virtual character? We developed a game in which people had to cooperate with a virtual character in order to survive on the moon. The character's face and voice were each designed to either smile or not, resulting in four conditions: smiling face and voice, neutral face and voice, smiling voice only (neutral face), and smiling face only (neutral voice). The experiment was set up in a museum over the course of several weeks; we report preliminary results from over 500 visitors, showing that people tend to trust the virtual character more in the mismatched condition with a smiling face and neutral voice. This might be because the two channels express different aspects of an emotion, as previously suggested.
