Abstract

With rapid advances in social robotics, humanoids, and autonomy, robot assistants appear to be within reach. However, robots are still unable to interact effectively with humans in activities of daily living. One challenge is that humans frequently use multiple communication modalities when they engage in collaborative activities. In this paper, we propose a Multimodal Interaction Manager, a framework for an assistive robot that maintains an active multimodal interaction with a human partner while performing physical collaborative tasks. At the heart of our framework is a Hierarchical Bipartite Action-Transition Network (HBATN), which allows the robot to infer the state of the task and the dialogue from the human partner's spoken utterances and observed pointing gestures, and to plan its next actions. Finally, we implemented this framework on a robot, providing preliminary evidence that the robot can successfully participate in a task-oriented multimodal interaction.
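The abstract only names the HBATN without defining it, so the sketch below is a minimal, assumed illustration of what such a bipartite state/action structure could look like: state nodes (task and dialogue states) alternate with action nodes, a state may expand into a sub-network (the hierarchical part), and multimodal cues (speech plus pointing) select transitions. All class, method, and variable names here are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ActionNode:
    """A robot or human action; in a bipartite network its edges lead only to state nodes."""
    name: str
    next_states: list = field(default_factory=list)

@dataclass
class StateNode:
    """A task/dialogue state; its edges lead only to action nodes."""
    name: str
    actions: list = field(default_factory=list)            # outgoing ActionNode edges
    subnetwork: Optional["HBATNSketch"] = None              # hierarchy: a state may expand into a sub-network

class HBATNSketch:
    """Toy traversal: infer the current state from multimodal input, then pick the next action."""

    def __init__(self, initial: StateNode):
        self.current = initial

    def infer_state(self, utterance: str, pointing_target: Optional[str]) -> StateNode:
        # Placeholder for multimodal inference: a real system would fuse speech and
        # gesture probabilistically; here we just match an action name in either modality.
        for action in self.current.actions:
            if action.name in utterance or action.name == pointing_target:
                self.current = action.next_states[0]
                break
        return self.current

    def plan_next_action(self) -> Optional[ActionNode]:
        # Greedy placeholder: take the first action available from the current state.
        return self.current.actions[0] if self.current.actions else None

# Example: "please pass the wrench" advances the interaction past the request state.
give_wrench = ActionNode("wrench", next_states=[StateNode("wrench_delivered")])
waiting = StateNode("awaiting_request", actions=[give_wrench])
manager = HBATNSketch(waiting)
manager.infer_state("please pass the wrench", pointing_target=None)
print(manager.plan_next_action())  # None: "wrench_delivered" has no outgoing actions in this toy network
```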
