Abstract

Human support robots (mobile robots able to perform useful domestic manipulative tasks) might be better accepted by people if they can communicate in ways people naturally understand, such as speech, facial expressions, and postures. Subtle (unconscious) mirroring of nonverbal cues during conversation promotes rapport building, which is essential for good communication. We investigate whether, as in human-human communication, a robot's ability to mirror its user's head movements and facial expressions in real time can improve the user's experience with it. We describe the technical integration of a Toyota Human Support Robot (HSR) with a facially expressive 3D embodied conversational agent (ECA), a combination we call ECA-HSR. The HSR and the ECA are aware of the user's head movements and facial emotions and can mirror them in real time. We then discuss a user study in which participants interacted with ECA-HSR in a simple social dialog task under three conditions: mirroring of the user's head movements, mirroring of the user's facial emotions, and mirroring of both. Our results suggest that interacting with an ECA-HSR that mirrors both the user's head movements and facial expressions is preferred over the other two conditions. Among other insights, the study revealed that the accuracy of open-source, real-time recognition of facial expressions of emotion needs improvement for better user acceptance.
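
To make the described pipeline concrete, below is a minimal sketch of what a real-time mirroring loop of this kind could look like. It is not the authors' implementation: the helpers estimate_head_pose, classify_emotion, send_robot_head, and send_eca_expression are hypothetical placeholders standing in for an open-source face tracker, an emotion classifier, and the HSR/ECA command interfaces; only the OpenCV camera capture is a real API. The two flags correspond to the study's mirroring conditions, and the pan negation assumes the robot should mirror (not copy) the user's head turn.

import time
import cv2  # OpenCV, used here only for camera capture


def estimate_head_pose(frame):
    """Placeholder: return the user's head (pan, tilt) in radians.
    In practice this would come from an open-source face tracker."""
    return 0.0, 0.0


def classify_emotion(frame):
    """Placeholder: return an emotion label such as 'happy' or 'neutral',
    from an open-source facial-expression recognizer."""
    return "neutral"


def send_robot_head(pan, tilt):
    """Placeholder: command the robot's head pan/tilt joints."""
    pass


def send_eca_expression(emotion):
    """Placeholder: set the 3D agent's facial expression to the given label."""
    pass


def mirror_loop(camera_index=0, rate_hz=15.0, mirror_head=True, mirror_face=True):
    """Run the real-time mirroring loop; the two flags select the condition."""
    cap = cv2.VideoCapture(camera_index)
    period = 1.0 / rate_hz
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if mirror_head:
                pan, tilt = estimate_head_pose(frame)
                # Negate pan so the robot mirrors the user like a reflection
                # rather than copying the motion (an assumed convention).
                send_robot_head(-pan, tilt)
            if mirror_face:
                send_eca_expression(classify_emotion(frame))
            time.sleep(period)
    finally:
        cap.release()


if __name__ == "__main__":
    mirror_loop(mirror_head=True, mirror_face=True)  # the "both" condition

Whether pan must be negated, and at what rate commands can be sent, depends on the camera and robot coordinate frames and the joint controller, so both are assumptions of this sketch.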
