Abstract

Service robots can already execute explicit, simple tasks assigned by humans, but they still lack the human ability to analyze an assigned task and ask questions to acquire the supplementary information needed to resolve ambiguities in the environment. Motivated by this observation, we fuse verbal language and pointing-gesture information to enable a robot to execute a vague task such as “bring me the book”. In this paper, we propose a system that integrates human-robot dialogue, mapping, and action-execution planning in unknown 3D environments. We ground natural language commands to a sequence of low-level instructions that the robot can execute. To express the location of the target pointed to by the user in a global fixed frame, we use a SLAM approach to build a map of the environment. Experimental results demonstrate that a NAO humanoid robot can acquire this skill in unknown environments using the proposed approach.
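To make the frame-transformation step concrete, here is a minimal sketch (not the paper's implementation; all function names and the 2D pose convention are illustrative assumptions) of how a pointed-at target observed in the robot's own frame could be expressed in the global map frame given a robot pose estimated by SLAM:

```python
# Minimal sketch: express a target given in the robot frame in the global
# map frame, using a 2D robot pose (x, y, theta) from SLAM.
# NOTE: this is an assumed, simplified 2D formulation for illustration only.
import numpy as np

def pose_to_matrix(x, y, theta):
    """2D robot pose as a 3x3 homogeneous transform (map <- robot)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def target_in_map(robot_pose, target_in_robot):
    """Map-frame coordinates of a target expressed in the robot frame."""
    T = pose_to_matrix(*robot_pose)
    p = np.array([target_in_robot[0], target_in_robot[1], 1.0])
    return (T @ p)[:2]

# Example: robot at (2, 1) facing 90 degrees; pointed target 0.5 m ahead.
print(target_in_map((2.0, 1.0, np.pi / 2), (0.5, 0.0)))  # -> [2.0, 1.5]
```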
