Abstract

This work is a first step towards an integration of multimodality with the aim of making efficient use of both human-like and non-human-like feedback modalities in order to optimize proactive information retrieval from task-related Human-Robot Interaction (HRI) in human environments. The presented approach combines the human-like modalities of speech and emotional facial mimicry with non-human-like modalities. The proposed non-human-like modalities are a screen displaying the robot's retrieved knowledge to the human and a pointer mounted above the robot's head for indicating directions and referring to objects in shared visual space, as an equivalent for arm and hand gestures. Initially, pre-interaction feedback is explored in an experiment investigating different approach behaviors in order to find socially acceptable trajectories that increase the success of interactions and thus the efficiency of information retrieval. Secondly, pre-evaluated human-like modalities are introduced. First results of a multimodal feedback study are presented in the context of the IURO project,[1] in which a robot asks for directions to a predefined goal location.

[1] Interactive Urban Robot, http://www.iuro-project.eu

