Abstract

In this paper we present a human-friendly control framework and an associated system architecture for compliant trajectory tracking of multimodal human gesture information on a position-controlled humanoid robot in real time. The contribution of this paper is a system architecture and control methodology that enable real-time compliant control of humanoid robots from demonstrated human motion and speech inputs. The human motion consists of body and head pose. The body motion, represented by a set of Cartesian-space motion descriptors, is captured by a marker-less vision processing module using a single depth camera. The head pose, represented by two degrees of freedom, is estimated and tracked using a single CCD camera. The architecture also enables fine motion control through human speech commands processed by a dedicated speech processing system. Motion descriptions from the three input modes are synchronized and retargeted to the joint-space coordinates of the humanoid robot in real time. The retargeted motion adheres to the robot's kinematic constraints and constitutes the reference joint motion, which is subsequently executed by a model-based compliant control framework through a torque-to-position transformation system. The compliant, low-gain tracking performed by this framework renders the system physically safe and therefore friendly to humans interacting with the robot. Experiments were performed on the Honda humanoid robot and the results are presented here.
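The torque-to-position transformation mentioned above can be illustrated with a minimal sketch: a low-gain PD law computes a desired joint torque from the reference motion, and an assumed joint stiffness maps that torque to an offset on the position command sent to the position-controlled robot. All names and gain values here are hypothetical and not taken from the paper; the actual framework is model-based and more elaborate.

```python
import numpy as np

def compliant_position_command(q, qd, q_ref, qd_ref, kp, kd, k_joint):
    """Illustrative torque-to-position transformation (all names hypothetical).

    A low-gain PD tracking law produces a desired joint torque, which is
    converted to a position offset through an assumed joint stiffness
    k_joint, yielding the command for the robot's position controller.
    """
    tau = kp * (q_ref - q) + kd * (qd_ref - qd)  # low-gain tracking torque
    return q + tau / k_joint                      # admittance-style mapping

# Usage: two joints; joint 0 lags its reference, joint 1 is on target.
q      = np.array([0.00, 0.10])   # measured joint positions [rad]
qd     = np.zeros(2)              # measured joint velocities
q_ref  = np.array([0.05, 0.10])   # retargeted reference positions
qd_ref = np.zeros(2)              # reference velocities
q_cmd = compliant_position_command(q, qd, q_ref, qd_ref,
                                   kp=20.0, kd=2.0, k_joint=400.0)
# q_cmd moves only a small fraction of the way toward q_ref, so contact
# with a human produces low interaction forces.
```

Because the commanded position stays close to the measured position, external pushes are absorbed rather than resisted, which is the essence of the physically safe, low-gain behavior described in the abstract.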
