Abstract

This work proposes a plausible approach for a humanoid robot to define its own body based on visuomotor correlation. A high correlation between visual motion and proprioception indicates to the robot that a visually moving object is related to the motor function of its own body. When the robot finds a motor-correlated object during motor exploration, visuomotor cues such as body posture and the visual features of the object are stored in visuomotor memory. The robot thus developmentally defines its own body without prior knowledge of its body appearance or kinematics. The body definition also adapts to an extended body, such as a tool the robot is grasping. Body movements are generated by stochastic motor babbling, while visuomotor memory biases the babbling to keep the body parts in sight. This ego-attracted bias helps the robot explore the joint space more efficiently. After motor exploration, visuomotor memory allows the robot to anticipate a visual image of its own body from a motor command. The proposed approach was experimentally evaluated with the humanoid robot iCub.
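The core self-detection step can be illustrated as correlating the image-plane speed of a tracked visual object with the robot's own joint velocities, and storing the (posture, visual feature) pair whenever the correlation is high. The following is a minimal sketch of that idea only; the window length, threshold, and all names are hypothetical illustrations, not the authors' implementation.

import numpy as np

CORR_THRESHOLD = 0.8   # assumed cut-off for "this object moves with my body"
WINDOW = 50            # number of recent samples used for the correlation

def motion_correlation(visual_speed, joint_speed):
    """Pearson correlation between visual motion and proprioceptive motion."""
    v = np.asarray(visual_speed[-WINDOW:], dtype=float)
    j = np.asarray(joint_speed[-WINDOW:], dtype=float)
    if v.std() < 1e-9 or j.std() < 1e-9:
        return 0.0     # no motion -> no evidence either way
    return float(np.corrcoef(v, j)[0, 1])

class VisuomotorMemory:
    """Stores (posture, visual feature) pairs for motor-correlated objects."""
    def __init__(self):
        self.entries = []

    def maybe_store(self, posture, visual_feature, corr):
        if corr > CORR_THRESHOLD:
            self.entries.append((np.array(posture), np.array(visual_feature)))

# Toy usage: a blob whose speed tracks the joint speed (the robot's own hand)
# is stored; an uncorrelated blob (a background object) is not.
rng = np.random.default_rng(0)
joint_speed = np.abs(np.sin(np.linspace(0, 6, WINDOW))) + 0.05 * rng.random(WINDOW)
own_hand_speed = 0.9 * joint_speed + 0.05 * rng.random(WINDOW)
background_speed = rng.random(WINDOW)

memory = VisuomotorMemory()
posture = rng.random(4)  # e.g. four arm joint angles
memory.maybe_store(posture, [0.2, 0.5], motion_correlation(own_hand_speed, joint_speed))
memory.maybe_store(posture, [0.8, 0.1], motion_correlation(background_speed, joint_speed))
print(len(memory.entries))  # expected: 1 (only the motor-correlated object)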
