Abstract
Eye-hand coordination is complicated by the fact that the eyes are constantly in motion relative to the head. This poses problems in interpreting the spatial information gathered from the retinas and using it to guide hand motion. In particular, eye-centered visual information must somehow be spatially updated across eye movements to be useful for future actions, and these representations must then be transformed into commands appropriate for arm motion. In this review, we present evidence that early visuomotor representations for arm movement are remapped relative to the gaze direction during each saccade. We find that this mechanism holds for targets in both far and near visual space. We then show how the brain incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images and transforming these into commands for arm movement. Next, we explore the possibility that hand-eye alignment is optimized for the eye with the best field of view. Finally, we describe how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our ‘conversion-on-demand’ model, in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with objects in personal space, thus providing a virtual on-line map of visuomotor space.
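As an illustrative aside, the reference-frame chaining alluded to above (retinal image interpreted through 3-D eye orientation, then through head orientation, to yield an arm-appropriate command) can be sketched numerically. The sketch below is not the authors' model; the frame names, offsets, and rotation angles are hypothetical values chosen only to show why a full 3-D (including torsional) eye rotation, rather than a flat 2-D gaze-angle correction, is needed to recover a target's location in shoulder-centered coordinates.

```python
# Minimal sketch, assuming hypothetical geometry: transform a gaze-centered
# target vector into shoulder-centered coordinates by chaining eye-in-head
# and head-on-body rotations plus fixed translation offsets.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Target direction encoded in an eye/gaze-centered frame (metres) -- example value.
target_in_eye = np.array([0.05, 0.10, 0.40])

# 3-D eye orientation in the head; note the torsional (first) component,
# which a purely 2-D gaze-angle correction would ignore. Angles are illustrative.
eye_in_head = R.from_euler("xyz", [2.0, 15.0, -10.0], degrees=True)

# Head orientation on the torso and fixed offsets (eye centre relative to the
# head, head relative to the shoulder) -- again, illustrative values only.
head_in_body = R.from_euler("xyz", [0.0, 5.0, 20.0], degrees=True)
eye_offset_in_head = np.array([0.03, 0.00, 0.07])
head_offset_in_body = np.array([0.00, 0.15, 0.25])

# Chain the transformations: eye frame -> head frame -> body (shoulder) frame.
target_in_head = eye_in_head.apply(target_in_eye) + eye_offset_in_head
target_in_body = head_in_body.apply(target_in_head) + head_offset_in_body

print("Reach target in shoulder-centered coordinates:", target_in_body)
```

Setting the torsional angle to zero in this sketch shifts the computed shoulder-centered target, which is one way to see why ignoring the eye's rotary geometry would bias a reach command.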