Accumulating evidence suggests the existence of a human action recognition system involving inferior frontal, parietal, and superior temporal regions that may participate in both the perception and execution of actions. However, little is known about the specificity of this system in response to different forms of human action. Here we present data from PET neuroimaging studies of the passive viewing of three distinct action types: intransitive self-oriented actions (e.g., stretching, rubbing one’s eyes), transitive object-oriented actions (e.g., opening a door, lifting a cup to the lips to drink), and the abstract, symbolic actions (signs) used in American Sign Language. Our results show that these different classes of human actions engage a frontal/parietal/STS human action recognition system in a highly similar fashion. However, this neural consistency across action classes holds primarily for hearing subjects. Data from deaf signers show a non-uniform response to different classes of human actions. As expected, deaf signers engaged left-hemisphere perisylvian language areas during the perception of signed language signs. Surprisingly, these subjects did not engage the expected frontal/parietal/STS circuitry during passive viewing of non-linguistic actions, but instead reliably activated middle occipital-temporal-ventral regions known to participate in the detection of human bodies, faces, and movements. Comparisons with data from hearing subjects establish statistically significant contributions of middle occipital-temporal-ventral regions to the processing of non-linguistic actions in deaf signers. These results suggest that during human motion processing, deaf individuals may engage specialized neural systems that allow for rapid, online differentiation of meaningful linguistic actions from non-linguistic human movements.