Abstract

The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes.
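The RDM construction described in the abstract can be sketched on synthetic data: a cross-validated multi-class linear classifier produces a confusion matrix, the symmetrized off-diagonal confusions become dissimilarities, and the diagonal supplies per-category self-similarity. The nearest-centroid classifier, the simulated "EEG" features, and all parameter values below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial data: n_trials x n_features, drawn from six
# synthetic categories (stand-ins for HB, HF, AB, AF, FV, IO).
n_categories, trials_per_cat, n_features = 6, 40, 32
centroids = rng.normal(0, 1, (n_categories, n_features))
X = np.vstack([c + rng.normal(0, 2.0, (trials_per_cat, n_features))
               for c in centroids])
y = np.repeat(np.arange(n_categories), trials_per_cat)

def confusion_from_nearest_centroid(X, y, n_folds=5):
    """Cross-validated confusion matrix from a simple linear
    (nearest-centroid) classifier applied to single trials."""
    n_classes = len(np.unique(y))
    cm = np.zeros((n_classes, n_classes))
    folds = np.arange(len(y)) % n_folds
    for f in range(n_folds):
        train, test = folds != f, folds == f
        cents = np.array([X[train & (y == c)].mean(axis=0)
                          for c in range(n_classes)])
        # Squared Euclidean distance of each test trial to each centroid.
        d = ((X[test, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        pred = d.argmin(axis=1)
        for t, p in zip(y[test], pred):
            cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True)  # rows -> proportions

cm = confusion_from_nearest_centroid(X, y)

# High mutual confusion implies low dissimilarity; symmetrize off-diagonal.
rdm = 1.0 - 0.5 * (cm + cm.T)
# The diagonal (self-confusion rate) indexes how compactly each
# category is represented -- the "self-similarity" measure.
self_similarity = np.diag(cm)
```

In this sketch the RDM is bounded in [0, 1] and symmetric by construction, so it can be fed directly into standard RSA comparisons such as rank correlation with model RDMs.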

Highlights

  • Recognizing objects from different categories is of fundamental importance for survival

  • We reduced the quantity of inanimate images in the set to ensure that each category contained the same number of exemplars, namely twelve images from each of six categories: Human Body (HB), Human Face (HF), Animal Body (AB), Animal Face (AF), Fruit Vegetable (FV), and Inanimate Object (IO)

  • Initial classifications were performed at the category level, meaning that each EEG trial was labeled with the category—rather than the exemplar name—of its corresponding stimulus


Introduction

Recognizing objects from different categories is of fundamental importance for survival. In humans, object recognition engages category-selective regions of visual cortex. These regions comprise a large area of occipital cortex anterior and lateral to early visual cortex, subdivided into ventral-temporal and lateral occipitotemporal regions [7]. Within these regions lie foci that are preferentially responsive to images of objects from different categories, such as faces, houses, body parts, or scenes [8,9,10]. Recent electroencephalography (EEG) and magnetoencephalography (MEG) studies have investigated category selectivity using multivariate pattern classification. This approach does not require preselection of spatial or temporal components of the brain response for analysis, but rather enables the full response to be analyzed at once [27,28,29], and can provide a data-driven means of identifying spatial, temporal, and spectral components underlying category discrimination [30,31,32]. We focus our analysis on two commonly compared image categories—human faces and objects—and compare between-category and within-category object processing.
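The idea of classifying subsets of the brain response to localize informative components can be illustrated with a sliding-window decoder: accuracy computed from each time window alone should peak over the latencies that carry the class difference. The simulated epochs, window length, and nearest-centroid decoder below are hypothetical stand-ins, not the analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical epoched data: n_trials x n_channels x n_timepoints,
# two classes (e.g. faces vs. objects) whose difference is confined
# to timepoints 20-29, so decoding should peak there.
n_trials, n_channels, n_times = 120, 8, 50
X = rng.normal(0, 1, (n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 20:30] += 0.8  # class effect only in this window

def windowed_accuracy(X, y, win=5, n_folds=4):
    """Cross-validated nearest-centroid decoding within sliding time
    windows; returns one accuracy per window start."""
    accs = []
    folds = np.arange(len(y)) % n_folds
    for t0 in range(X.shape[2] - win + 1):
        # Flatten channels x window-timepoints into one feature vector.
        Xw = X[:, :, t0:t0 + win].reshape(len(y), -1)
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            cents = np.array([Xw[train & (y == c)].mean(axis=0)
                              for c in (0, 1)])
            d = ((Xw[test, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
            correct += (d.argmin(axis=1) == y[test]).sum()
        accs.append(correct / len(y))
    return np.array(accs)

acc = windowed_accuracy(X, y)
peak = int(acc.argmax())  # should fall in or near the injected window
```

The same scheme applies spatially: restricting the feature vector to subsets of channels instead of time windows identifies which electrodes carry category information.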

