Abstract

In recent years, we have witnessed the beginning of a revolution in personal robotics. Once associated with highly specialized manufacturing tasks, robots are rapidly becoming part of our everyday lives. The potential of these systems is far-reaching: from co-worker robots that operate and collaborate with humans side by side to robotic tutors in schools that interact with humans in a shared environment. All of these scenarios require systems that are able to act and react in a social way. Evidence suggests that robots should leverage channels of communication that humans understand, despite differences in physical form and capabilities. We have developed Furhat, a social robot that conveys several important aspects of human face-to-face interaction, such as visual speech, facial expression, and eye gaze, by means of facial animation retro-projected onto a physical mask. In this presentation, we cover a series of experiments that attempt to quantify the effect of our social robot and compare it to other interaction modalities. We show that performance on a number of tasks, ranging from low-level audio-visual speech perception to vocabulary learning, improves compared to unimodal (e.g., audio-only) settings or 2D virtual avatars.
