Abstract

The anthropomorphization of human-robot interaction is a fundamental aspect of the design of social robotics applications. This article describes how an interaction model based on multimodal signs (visual, auditory, tactile, proxemic, and others) can improve communication between humans and robots. We examined and appropriately filtered all the robot sensory data needed to realize our interaction model. We also paid particular attention to backchannel communication, making it both bidirectional and evident through auditory and visual signals. Our model, based on a task-level architecture, was integrated into an application called W@ICAR, which proved efficient and intuitive even with people who had not previously interacted with the robot. The application has been validated from both a functional and a user-experience point of view, with positive results: both the pragmatic and the hedonic estimators show that many users particularly appreciated it. The model component has been implemented as Python scripts in the Robot Operating System (ROS) environment.
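As a rough illustration only (the paper's actual code is not shown here), filtering multimodal sensory data before it reaches an interaction model might be sketched as follows. All class names, modality labels, and thresholds are hypothetical, and the ROS middleware is replaced with plain Python callbacks so the sketch runs standalone:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One reading from a single sensory channel."""
    modality: str      # e.g. "visual", "auditory", "tactile", "proxemic"
    value: float       # normalized reading in [0, 1]
    confidence: float  # sensor's self-reported confidence in [0, 1]

class MultimodalFilter:
    """Keeps only readings confident enough to drive the interaction model."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.latest = {}  # last accepted signal per modality

    def update(self, sig: Signal) -> bool:
        # Discard low-confidence readings instead of forwarding noise.
        if sig.confidence >= self.threshold:
            self.latest[sig.modality] = sig
            return True
        return False

    def snapshot(self) -> dict:
        # Fused view: the most recent trusted value per modality.
        return {m: s.value for m, s in self.latest.items()}

# Example: a strong visual cue is kept, a noisy auditory one is dropped.
f = MultimodalFilter(threshold=0.6)
f.update(Signal("visual", 0.9, 0.8))    # accepted
f.update(Signal("auditory", 0.4, 0.3))  # rejected: confidence too low
```

In a ROS deployment, each `update` call would instead be triggered by a topic subscriber callback, one per sensory channel.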
