Abstract

The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent of any specific sensory modality or sensory experience. In the present study, we sought to determine to what extent this distributed and ‘more abstract’ representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this end, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-executed actions. Multivoxel pattern analysis (MVPA)-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as ‘action’ the patterns of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with the brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with an MVPA-based classifier in both sighted and blind individuals, and independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
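
The cross-modal decoding logic summarized above can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the data are synthetic stand-ins, and the linear-SVM choice, trial counts, and variable names are assumptions made purely for illustration.

```python
# Minimal sketch of cross-modal MVPA decoding (illustrative assumptions only).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 500              # hypothetical trial count and ROI size

# Single-trial response patterns (e.g. GLM betas) for each sensory condition;
# here they are random numbers standing in for real fMRI data.
X_visual = rng.standard_normal((n_trials, n_voxels))
X_auditory = rng.standard_normal((n_trials, n_voxels))
y_visual = rng.integers(0, 2, n_trials)   # 1 = action, 0 = non-action
y_auditory = rng.integers(0, 2, n_trials)

# Train on one modality, test on the other: above-chance transfer accuracy
# would indicate a pattern code shared across the senses (supramodal coding).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_visual, y_visual)
print(f"visual -> auditory decoding accuracy: {clf.score(X_auditory, y_auditory):.2f}")
```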

Highlights

  • The ability to understand others’ actions and intentions from distinct sensory cues is central for daily social interactions

  • Recent studies have proposed that the representation of actions within the premotor, inferior frontal, parietal and temporal regions of the action-observation network (AON) may be based on a distributed and overlapping functional organization [8,9], similar to what has already been described for the representation of objects and sounds in other cortical areas (e.g. [10,11,12])

  • We previously showed that congenitally blind individuals activate a premotor-temporo-parietal cortical network in response to aurally presented actions that overlaps both with human mirror system (hMS) areas found in sighted subjects in response to visually and aurally presented stimuli, and with the brain response elicited by motor pantomime of the same actions [25]

Introduction

The ability to understand others’ actions and intentions from distinct sensory cues is central for daily social interactions. We previously showed that congenitally blind individuals activate a premotor-temporo-parietal cortical network in response to aurally presented actions that overlaps both with human mirror system (hMS) areas found in sighted subjects in response to visually and aurally presented stimuli, and with the brain response elicited by motor pantomime of the same actions [25]. These findings indicate that the hMS, as part of the action-observation network (AON), codes not merely the observed motor act, but rather a more abstract representation that is independent of any specific sensory modality or experience [17,26,27,28]. We posited that, because of the hypothesized supramodal nature of action representation, a multivoxel pattern analysis (MVPA) would be able to classify action and non-action stimuli across the visual and auditory modalities and across the sighted and blind groups, and to recognize as an ‘action’ the neural patterns associated with actual motor performance.
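
The three generalization tests implied by this hypothesis (across modalities, across groups, and onto motor execution) can likewise be sketched. Again, the data are synthetic, and every name (sighted/visual training set, blind/auditory test set, execution runs) and the permutation-based chance estimate are illustrative assumptions rather than the study's actual procedure.

```python
# Sketch of the hypothesized generalization tests (synthetic data, assumed setup).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def fake_patterns(n_trials=40, n_voxels=500):
    """Random stand-ins for single-trial multivoxel response patterns."""
    X = rng.standard_normal((n_trials, n_voxels))
    y = rng.integers(0, 2, n_trials)          # 1 = action, 0 = non-action
    return X, y

X_train, y_train = fake_patterns()            # e.g. sighted subjects, visual runs
X_blind, y_blind = fake_patterns()            # e.g. blind subjects, auditory runs
X_exec, _ = fake_patterns(n_trials=20)        # motor execution (pantomime) runs

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)

# (i) cross-modal / cross-group transfer of the action vs. non-action decoder
print("sighted-visual -> blind-auditory accuracy:", clf.score(X_blind, y_blind))

# (ii) does the perceptually trained decoder label execution trials as 'action'?
print("execution trials labelled 'action':", (clf.predict(X_exec) == 1).mean())

# (iii) label-permutation null to estimate the empirical chance level
null_acc = [clf.fit(X_train, rng.permutation(y_train)).score(X_blind, y_blind)
            for _ in range(100)]
print("permutation-based chance level:", float(np.mean(null_acc)))
```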

