Abstract

Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats. A large network of brain regions in primates participates in the processing of others' actions, with the anterior intraparietal area (AIP) playing a major role in routing information about observed manipulative actions (OMAs) to the other nodes of the network. This study investigated whether the AIP also contributes to invariant coding of OMAs across different visual formats. We recorded AIP neuronal activity from two macaques while they observed videos portraying seven manipulative actions (drag, drop, grasp, push, roll, rotate, squeeze) in four visual formats. Each format resulted from the combination of two body postures of the actor (standing, sitting) and two viewpoints (lateral, frontal). Out of 297 recorded units, 38% were OMA-selective in at least one format. A robust population code for viewpoint and the actor's body posture emerged shortly after stimulus presentation, followed by OMA selectivity. Although we found no fully invariant OMA-selective neuron, we discovered a population code that allowed us to classify action exemplars irrespective of the visual format. This code depends on a multiplicative mixing of signals about OMA identity and visual format, particularly evidenced by a set of units maintaining a relatively stable OMA selectivity across formats despite considerable rescaling of their firing rate depending on the visual specificities of each format. These findings suggest that the AIP integrates format-dependent information and the visual features of others' actions, leading to a stable readout of observed manipulative action identity.
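The cross-format classification described above can be illustrated with a minimal sketch on synthetic data. Everything here — the population size, the multiplicative-gain generative model, and the nearest-centroid decoder — is an assumption for illustration, not the paper's actual recordings or decoding pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_actions, n_formats, n_trials = 60, 7, 4, 20

# Hypothetical generative model: each unit has an action tuning curve (Hz)
# whose amplitude is rescaled by a format-dependent gain (multiplicative mixing).
tuning = rng.gamma(2.0, 5.0, size=(n_units, n_actions))
gain = rng.uniform(0.5, 1.5, size=(n_units, n_formats))

# Mean rates per (format, action, unit), then noisy single trials.
mean_rates = np.einsum('ua,uf->fau', tuning, gain)
trials = mean_rates[:, :, None, :] + rng.normal(
    0.0, 2.0, size=(n_formats, n_actions, n_trials, n_units))

# Train a nearest-centroid classifier on three formats ...
centroids = trials[:3].mean(axis=(0, 2))              # (action, unit)

# ... and test on the held-out fourth format.
test = trials[3].reshape(n_actions * n_trials, n_units)
labels = np.repeat(np.arange(n_actions), n_trials)
dists = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == labels).mean()
print(f"cross-format decoding accuracy: {accuracy:.2f} (chance = {1/n_actions:.2f})")
```

Because a positive gain rescales but does not reorder each unit's action tuning, centroids estimated from the training formats still generalize to the held-out format — a toy analogue of the population-level invariance reported above.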

Highlights

  • Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats

  • In contrast to previous studies, which focused only on observed grasping actions, we recently showed that the monkey AIP hosts neurons encoding specific observed manipulative actions (OMAs) and routes this information to the other nodes of the action observation network [11]

  • We found that AIP neuronal activity first provides a robust population code for viewpoint and the actor's body posture, and subsequently exhibits specificity for OMA exemplars
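The multiplicative-mixing idea behind these highlights can be made concrete with a toy unit (all numbers hypothetical): a format-dependent gain rescales the firing rate, yet the unit's preferred OMA — and the full rank order of its tuning — is unchanged.

```python
import numpy as np

# Hypothetical tuning of one AIP unit over the seven OMAs
# (drag, drop, grasp, push, roll, rotate, squeeze), in spikes/s.
tuning = np.array([12.0, 4.0, 20.0, 7.0, 15.0, 2.0, 9.0])

# Hypothetical multiplicative gain for each of the four visual formats.
gains = {"lateral-standing": 1.0, "lateral-sitting": 0.6,
         "frontal-standing": 1.4, "frontal-sitting": 0.8}

# Rates rescale per format, but the preferred OMA (argmax) is stable.
rates = {fmt: g * tuning for fmt, g in gains.items()}
preferred = {fmt: int(np.argmax(r)) for fmt, r in rates.items()}
print(preferred)   # index 2 (grasp) in every format
```

This is why z-scoring or rank-based readouts can recover a format-invariant action code from units whose absolute firing rates differ markedly across formats.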



