Abstract

The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates in order to produce motor output. However, the interpretation of neurophysiological data within this framework is ambiguous and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions consistent with responses observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and the hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, not desired movement endpoints.

Highlights

  • How do we reach to what we see? The answer to this sensorimotor problem may seem evident as we prepare to grab a cup in front of us

  • Does the brain explicitly plan entire movement trajectories, or are these emergent properties of motor control? Although behavioral studies support the notion of trajectory planning for visually guided reaches, a neurobiologically plausible mechanism for this observation has been lacking

  • I show that the predictions of this model closely resemble the population responses of neurons in posterior parietal cortex, a visuomotor planning area of the monkey brain


Introduction

How do we reach to what we see? The answer to this sensorimotor problem may seem evident as we prepare to grab a cup in front of us. The location of the cup is sensed in visual coordinates, but motor commands need to be specified with respect to the arm. A reasonable assumption is that, to produce movement, the brain converts goal representations between these two coordinate frames, and that transitional goal representations during this process are expressed with respect to readily identifiable, intervening parts of the body, such as the head, the trunk, or the shoulder [1]. Recordings of neural activity from sensorimotor areas, however, show that goals are encoded in reference frames spanning the continuum between such intuitive cases. In reach-related areas of posterior parietal cortex (PPC), a neuron may encode reach goals with respect to the eye, with respect to the hand, or with respect to an arbitrary point in between [2,3,4]. Similar results have been reported for sensorimotor modalities in the ventral [5] and lateral [6] intraparietal regions, parietoinsular vestibular cortex [7], and superior colliculus [8].

