Abstract

The speed, accuracy, and adaptability of human movement depend on the brain performing an inverse kinematics transformation, i.e., a transformation from visual to joint-angle coordinates, learned from experience. In human visually guided movement control, a feedback controller acting on the hand-position error must therefore be learned. This paper proposes two novel models that learn the coordinate transformation of the human visual feedback controller in the presence of time delay. The proposed models redress drawbacks of existing models in that they do not rely on complex signal switching, which does not appear neurophysiologically plausible.
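
For illustration only (this is not the paper's proposed models), the sketch below simulates a planar two-link arm under delayed visual feedback: the hand-position error, measured in visual (Cartesian) coordinates, is mapped to joint-angle corrections through the Jacobian transpose, one simple stand-in for the visuomotor coordinate transformation discussed above. The link lengths, feedback gain, and 100 ms delay are assumed values chosen for the example.

```python
# Minimal sketch (not the paper's models): a planar 2-link arm driven by a
# delayed visual feedback controller. The hand-position error in visual
# (Cartesian) coordinates is converted to joint-angle updates via the
# Jacobian transpose. All constants below are illustrative assumptions.
from collections import deque
import numpy as np

L1, L2 = 0.30, 0.25          # link lengths [m] (assumed)
DT, DELAY_STEPS = 0.01, 10   # 10 ms step; 100 ms visual feedback delay (assumed)
GAIN = 2.0                   # feedback gain (assumed)

def forward_kinematics(q):
    """Hand position in visual (Cartesian) coordinates from joint angles."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """2x2 Jacobian of the hand position with respect to the joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def simulate(q0, target, steps=500):
    """Drive the hand toward a Cartesian target using delayed visual feedback."""
    q = np.array(q0, dtype=float)
    # A buffer of past hand positions models the visual feedback delay.
    buffer = deque([forward_kinematics(q)] * DELAY_STEPS, maxlen=DELAY_STEPS)
    for _ in range(steps):
        delayed_hand = buffer[0]                # what vision reports "now"
        error_visual = target - delayed_hand    # error in visual coordinates
        # Coordinate transformation: visual error -> joint-angle update.
        dq = GAIN * jacobian(q).T @ error_visual
        q += DT * dq
        buffer.append(forward_kinematics(q))
    return q, forward_kinematics(q)

if __name__ == "__main__":
    q_final, hand_final = simulate(q0=[0.5, 0.5], target=np.array([0.35, 0.30]))
    print("final joint angles:", q_final)
    print("final hand position:", hand_final)
```

The Jacobian transpose is used here only because it avoids explicit matrix inversion and is a common, simple approximation of a visuomotor coordinate transformation; the paper's learned transformations and how they handle the delay are not reproduced by this sketch.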

