Abstract

A neural network model is described for adaptive control of arm movement trajectories during visually guided reaching. The model clarifies how a child, or infant robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: as an infant makes internally generated hand movements, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein (1989) have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how the arm movement system can endogenously generate movements which lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. These arm movement properties obtain in the Adaptive Vector Integration to Endpoint (AVITE) model, an adaptive neural circuit based on the VITE model for arm and speech trajectory generation of Bullock and Grossberg (1988a).
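The core VITE dynamics referenced in the abstract can be sketched as a difference-vector stage V that tracks the gap between a target position T and the present position P, with P integrating V under a time-varying GO signal G(t). The sketch below is a minimal single-channel simplification (the full model uses opponent agonist/antagonist channels), and all parameter values and the GO-signal form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vite_trajectory(target, p0, alpha=4.0, g_max=2.0, dt=0.01, steps=1000):
    """Euler-integrate a simplified VITE circuit (after Bullock & Grossberg, 1988):
        dV/dt = alpha * (-V + T - P)    # difference-vector stage
        dP/dt = G(t) * max(V, 0)        # present-position stage gated by GO
    Parameter values and the ramping GO signal are illustrative assumptions.
    """
    T = np.asarray(target, dtype=float)
    P = np.asarray(p0, dtype=float)
    V = np.zeros_like(P)
    for k in range(steps):
        t = k * dt
        G = g_max * t / (t + 1.0)               # slowly ramping GO signal (assumed form)
        V += dt * alpha * (-V + (T - P))        # V relaxes toward the difference vector
        P += dt * G * np.maximum(V, 0.0)        # rectified V drives the position integrator
    return P

# Present position converges toward the target as the GO signal ramps up
final = vite_trajectory(target=[1.0, 0.5], p0=[0.0, 0.0])
```

Because movement speed is set by the GO signal rather than by the difference vector alone, the same circuit produces trajectories of different durations to the same target, a property the AVITE model inherits and tunes adaptively.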
