Abstract

This paper examines the value of a developmental approach to the design of autonomous robots and to the understanding of adaptive behaviors such as imitation. The proposed model is a neural network architecture that learns and uses associations between vision and arm movements, even when the problem is ill-posed (as in the mapping between the visual space and the joint space of the arm). The central part of the model is a visuo-motor map able to represent the position of the arm's end point in an ego-centered space (constrained by vision) according to motor information (proprioception). Sensorimotor behaviors such as tracking, pointing, spontaneous imitation, and sequence learning can then be obtained as the consequence of different internal dynamics computed on neural fields triggered by the visuo-motor map. The readout mechanism also explains how an apparently complex behavior can be generated and controlled from a single, simple internal dynamics and how, at the same time, the learning problem can be simplified. While highlighting the generic nature of our model, we show that our robot can autonomously imitate and learn more complex sequences of gestures after online learning of the visual and proprioceptive control of its hand extremity. Finally, we defend the idea of a co-development of imitative and sensorimotor capabilities, allowing increasingly complex behavioral capabilities to be acquired and built.
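
To make the "internal dynamics computed on neural fields" concrete, the following is a minimal sketch of a standard 1-D Amari-style dynamic neural field, the kind of mechanism the abstract refers to. All parameter names and values here are illustrative assumptions, not taken from the paper: a localized input (standing in for the hand position delivered by the visuo-motor map) makes a bump of activity emerge and stabilize, which a readout can then use to drive tracking or pointing.

```python
import numpy as np

# Minimal 1-D Amari-style neural field, integrated with Euler steps.
# All parameters below are illustrative, not values from the paper.
N = 200                        # number of field units covering the ego-centered space
x = np.linspace(-np.pi, np.pi, N)
dx = x[1] - x[0]
tau, h = 10.0, -0.5            # time constant and resting level

def kernel(d, a_exc=1.5, s_exc=0.3, a_inh=0.8, s_inh=1.0):
    """Mexican-hat lateral interaction: local excitation, broader inhibition."""
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

D = x[:, None] - x[None, :]    # pairwise distances between field positions
W = kernel(D)                  # lateral interaction matrix

def step(u, I, dt=1.0):
    """One Euler step of: tau * du/dt = -u + W @ f(u) * dx + I + h."""
    f = (u > 0).astype(float)  # Heaviside output nonlinearity
    return u + dt / tau * (-u + W @ f * dx + I + h)

# Localized input, e.g., the visually detected hand position around x = 0.5.
u = np.full(N, h)
I = 2.0 * np.exp(-(x - 0.5)**2 / (2 * 0.1**2))
for _ in range(200):
    u = step(u, I)
print("bump peak at x =", x[np.argmax(u)])   # settles near the input position
```

The point of such a field is that one generic dynamics (bump formation and competition) can serve several behaviors depending only on what feeds it and what reads it out, which matches the abstract's claim that one simple internal dynamics can underlie tracking, pointing, and imitation.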
