Abstract
Seeing Suzie bite an apple or reading the sentence 'Suzie munched the apple' both convey a similar idea. But is there a common neural basis for action comprehension whether it is conveyed through video or text? The current study used functional magnetic resonance imaging to address this question. Participants observed videos or read sentences describing two categories of actions: eating and cleaning. A conjunction analysis of video and sentence stimuli revealed that cleaning actions (compared to eating actions) evoked a greater response in dorsal frontoparietal regions, as well as within the medial fusiform gyrus. These findings reveal supramodal representations of perceived actions in the human brain, which are specific to action categories and independent of input modality (video or written words). In addition, some brain regions associated with cleaning and eating actions showed an interaction with modality, manifested as a greater sensitivity to video than to sentence stimuli. Together, this pattern of results demonstrates both supramodal and modality-sensitive representations of action categories in the human brain, a finding with implications for how we understand other people's actions from video and written sources.