Abstract

This paper presents a framework for multimodal human-robot interaction. The proposed framework is being implemented in a personal robot called Maggie, developed at the RoboticsLab of the University Carlos III of Madrid for social interaction research. The control architecture of this personal robot is a hybrid architecture called AD (Automatic-Deliberative) that incorporates an Emotion Control System (ECS). Maggie's main goal is to interact with humans and establish a peer-to-peer relationship with them. To achieve this goal, a set of human-robot interaction skills is developed based on the proposed framework. These skills involve tactile, visual, remote voice, and sound modes. Multimodal fusion and synchronization are also presented in this paper.
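To make the abstract's notions of per-mode interaction skills and multimodal fusion concrete, the following is a minimal sketch, not taken from the paper: the Skill class, the Event type, the fuse function, and the time window are all hypothetical names and parameters chosen for illustration, assuming skills publish timestamped events per mode and fusion groups events that arrive close together in time.

```python
# Hypothetical sketch; none of these names come from the paper.
# Per-mode HRI "skills" emit timestamped events, and a fusion step
# groups events that fall within a short time window, approximating
# multimodal fusion and synchronization.

import time
from dataclasses import dataclass, field
from typing import List


@dataclass(order=True)
class Event:
    timestamp: float
    mode: str = field(compare=False)      # e.g. "tactile", "voice"
    payload: str = field(compare=False)   # recognized content


class Skill:
    """Base class for an interaction skill bound to one mode."""

    def __init__(self, mode: str):
        self.mode = mode
        self.active = False

    def activate(self) -> None:
        self.active = True

    def deactivate(self) -> None:
        self.active = False

    def emit(self, payload: str) -> Event:
        return Event(time.time(), self.mode, payload)


def fuse(events: List[Event], window: float = 0.5) -> List[List[Event]]:
    """Group events whose timestamps lie within `window` seconds of the
    group's first event; each group is one fused multimodal input."""
    groups: List[List[Event]] = []
    for ev in sorted(events):
        if groups and ev.timestamp - groups[-1][0].timestamp <= window:
            groups[-1].append(ev)
        else:
            groups.append([ev])
    return groups


if __name__ == "__main__":
    touch = Skill("tactile")
    voice = Skill("voice")
    touch.activate()
    voice.activate()
    stream = [touch.emit("head touched"), voice.emit("hello Maggie")]
    for group in fuse(stream):
        print([(e.mode, e.payload) for e in group])
```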
