Abstract

Creating convincing affective robot behavior is a challenging task. In this paper, we coordinate multiple modalities of communication, namely speech, facial expressions, and gestures, so that the robot can interact with human users in an expressive manner. The proposed system uses videos to induce target emotions in the participants, which then serve as the starting point for interactive discussions between each participant and the robot about the content of each video. During each interaction experiment, the expressive ALICE robot generates a multimodal behavior adapted to the affective content of the video, and the participant evaluates its characteristics at the end of the experiment. This study discusses the multimodality of the robot's behavior and its positive effect on the clarity of the emotional content of the interaction. Moreover, it provides personality- and gender-based evaluations of the emotional expressivity of the generated behavior, investigating how it was perceived by introverted versus extroverted and male versus female participants within a human–robot interaction context.
