Abstract

Pointing at something refers to orienting the hand, the arm, the head, or the body in the direction of an object or an event. This skill constitutes a basic communicative ability for cognitive agents such as humanoid robots. The goal of this study is to show that approximate and, in particular, precise pointing can be learned as a direct mapping from the object's pixel coordinates in the visual field to hand positions or joint angles. This highly nonlinear mapping defines the pose and orientation of a robot's arm. The study underlines that this is possible without explicitly calculating the object's depth and 3D position, since only the direction is required. To this end, three state-of-the-art neural network paradigms (multilayer perceptron, extreme learning machine, and reservoir computing) are evaluated on real-world data gathered from the humanoid robot iCub. For the case of precise pointing, training data are interactively generated and recorded through kinesthetic teaching. Successful generalization is verified on the iCub using a laser pointer attached to its hand.
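
For illustration, the following is a minimal sketch of how such a direct pixel-to-joint-angle mapping could be learned with one of the evaluated paradigms, an extreme learning machine. The data shapes, number of joints, hidden-layer size, and value ranges are assumptions for the example, not values taken from the paper.

```python
import numpy as np

# Hedged sketch: an extreme learning machine (ELM) regressor mapping
# 2D pixel coordinates (u, v) of the target object to arm joint angles.
# Shapes and ranges below are illustrative assumptions.

rng = np.random.default_rng(0)
n_samples, n_joints, n_hidden = 500, 4, 100

# Placeholder training data; in a real setup these would come from
# kinesthetic teaching on the robot (pixel position -> recorded joint angles).
X = rng.uniform(0, 320, size=(n_samples, 2))             # (u, v) in pixels
Y = rng.uniform(-1.0, 1.0, size=(n_samples, n_joints))   # joint angles (rad)

# ELM: fixed random input weights, output weights solved analytically.
W_in = rng.normal(scale=0.01, size=(2, n_hidden))
b = rng.normal(scale=0.1, size=n_hidden)
H = np.tanh(X @ W_in + b)                        # hidden-layer responses
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares readout

def point_at(u, v):
    """Predict joint angles that orient the arm toward pixel (u, v)."""
    h = np.tanh(np.array([u, v]) @ W_in + b)
    return h @ W_out

print(point_at(160.0, 120.0))
```

The same training data could be fed to a multilayer perceptron or a reservoir-computing readout; only the way the hidden representation is formed and trained would change.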
