Abstract

This paper describes the design of a humanoid robot system that performs daily assistive tasks. A central issue in realizing such a system is the integration of an action subsystem and a recognition subsystem. We have designed a task-relevant knowledge base that is referenced by both the action and recognition subsystems. It contains not only information about the environment and object shapes, but also knowledge for manipulation and navigation, as well as for object recognition and robot localization. Since motion generation and object recognition share the same knowledge base, the robot can plan motions while recognizing objects and vice versa, which increases the effectiveness and robustness of the system. Three vision-guided behavior controls are applied during action execution: (i) visual self-localization to recognize the robot's position, (ii) visual object localization to update object locations in the world model for behavior generation, and (iii) visual behavior verification to confirm the success of a motion. Finally, we demonstrated a kitchen service task performed by multiple humanoid robots to show the broad capability and applicability of the proposed representation. The knowledge descriptions required for this demonstration consist of 13 behaviors, six objects with manipulation and visual-feature knowledge, three search areas for the recognition process, and two task-relevant visual behavior verification knowledge bases. We found that the description required for the kitchen service task is rather simple compared to the complexity of the demonstration scenario.
