Abstract

The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of the neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization: the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same brain regions, and on similar time scales, during superordinate categorization: specifically, in a network of heteromodal regions, and late in the course of the categorization process. To test this hypothesis, we combined electroencephalography and event-related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Consistent with our hypothesis, our results reveal that during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli, and the ERP results suggest that superordinate categorization is reflected in a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and time windows in which modality-independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer-latency responses for categorization of non-living items.

Highlights

  • Semantic knowledge includes information about the features, function, and properties of objects and the categories to which they belong (Caramazza et al., 1990; Caramazza and Mahon, 2006)

  • Consistent with our hypothesis, during the course of superordinate categorization, information provided by diverse inputs appears to converge in both time and space: functional magnetic resonance imaging (fMRI) showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli

  • The fMRI results suggest that, consistent with previous reports (Chao and Martin, 2000; Martin, 2007), there are areas in the basal temporal and posterior parietal cortices whose responses are significantly modulated by semantic category


Introduction

Semantic knowledge includes information about the features, function, and properties of objects and the categories to which they belong (Caramazza et al., 1990; Caramazza and Mahon, 2006). Semantic knowledge is hierarchically organized (Mervis and Crisafi, 1982), with superordinate categories having the greatest degree of generality, encapsulating information about the more abstract features of a stimulus (e.g., whether it is living or non-living) (Jolicoeur et al., 1984; Rogers and Patterson, 2007). On this basis, it might be expected that when subjects are asked to select the superordinate category to which an item belongs, the abstract information derived from different input modalities (e.g., auditory words or visual pictures) will be processed in a unitary, modality-independent fashion: in the same brain regions, and at similar times following stimulus presentation. However, a single set of experiments evaluating convergent responses in both time and space, while utilizing widely divergent input modalities, has never been conducted.
