Abstract

This paper presents a self-organizing neural network model for visuo-motor coordination of a redundant humanoid robot arm in reaching tasks. The proposed approach is based on a biologically inspired model that replicates some characteristics of human control: learning occurs through an action-perception cycle and does not require explicit knowledge of the geometry of the manipulator. The learned transformation is a mapping from spatial movement direction to joint rotation. During learning, the system creates relations between the motor data associated with endogenous movements performed by the robotic arm and the sensory consequences of those motor actions, i.e. the final position and orientation of the end effector. The learned relations are stored in the neural map structure and are then used, after learning, to generate motor commands aimed at reaching a given point in 3D space. The work extends (E. Guglielmelli, et al.) by adding end-effector orientation control. Experimental trials confirmed the system's capability to control the end-effector position and orientation and to manage the redundancy of the robotic manipulator in reaching the 3D target point even under additional constraints, such as one or more clamped joints, without additional learning phases.
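To illustrate the general idea of learning a direction-to-joint-rotation mapping from endogenous (babbling) movements and then reusing it for reaching, the following is a minimal sketch, not the authors' implementation: a toy 3-DOF planar arm, a small self-organizing set of units, and illustrative learning rates are all assumptions, and the configuration dependence of the mapping is ignored for brevity.

```python
# Hedged sketch (assumed toy setup, not the paper's system): a planar 3-link arm
# learns, by random joint perturbations, which joint rotation moves the end
# effector in a given spatial direction, then uses that map to reach a target.
import numpy as np

rng = np.random.default_rng(0)
LINKS = np.array([0.3, 0.25, 0.15])          # assumed link lengths [m]

def forward_kinematics(q):
    """End-effector position of a planar 3-link arm (redundant for 2D reaching)."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

# --- Learning phase: endogenous (babbling) movements ------------------------
# Each unit stores a preferred movement direction and the joint rotation
# (per unit of displacement) that produced movement in that direction.
N_UNITS = 64
pref_dirs = rng.normal(size=(N_UNITS, 2))
pref_dirs /= np.linalg.norm(pref_dirs, axis=1, keepdims=True)
stored_dq = np.zeros((N_UNITS, 3))

q = rng.uniform(-0.5, 0.5, size=3)
for _ in range(20000):
    dq = rng.normal(scale=0.05, size=3)       # random joint perturbation
    dx = forward_kinematics(q + dq) - forward_kinematics(q)
    if np.linalg.norm(dx) < 1e-6:
        continue
    direction = dx / np.linalg.norm(dx)
    winner = np.argmax(pref_dirs @ direction)  # best-matching unit
    # Move the winner toward the observed direction / joint-rotation pair
    pref_dirs[winner] += 0.1 * (direction - pref_dirs[winner])
    pref_dirs[winner] /= np.linalg.norm(pref_dirs[winner])
    stored_dq[winner] += 0.1 * (dq / np.linalg.norm(dx) - stored_dq[winner])

# --- Reaching phase: query the learned map, no arm geometry used ------------
target = np.array([0.4, 0.3])
q = np.zeros(3)
for _ in range(200):
    error = target - forward_kinematics(q)
    if np.linalg.norm(error) < 1e-3:
        break
    direction = error / np.linalg.norm(error)
    winner = np.argmax(pref_dirs @ direction)
    # Apply the stored joint rotation, scaled to take a small step
    q += stored_dq[winner] * min(np.linalg.norm(error), 0.05)

print("final reaching error:", np.linalg.norm(target - forward_kinematics(q)))
```

The sketch omits orientation control and redundancy management with clamped joints; it only shows how sensorimotor pairs gathered during babbling can later be queried to drive the end effector toward a target without an explicit kinematic model.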
