Abstract

Recent investigations have focused on the spatiotemporal dynamics of visual recognition by appealing to pattern analysis of EEG signals. While this work has established the ability to decode identity-level information (such as the identity of a face or of a word) from neural signals, much less is known about the precise nature of the signals that support such feats, their robustness across visual categories, or their consistency across human participants. Here, we address these questions through the use of EEG-based decoding and multivariate feature selection as applied to three visual categories: words, faces, and face ensembles (i.e., crowds of faces). Specifically, we use recursive feature elimination to estimate the diagnosticity of time- and frequency-based EEG features for identity-level decoding across three datasets targeting each of the three categories. We then relate feature diagnosticity across categories and across participants while also aiming to increase decoding performance and reliability. Our investigation shows that word and face processing are similar in their reliance on spatiotemporal information provided by occipitotemporal channels. In contrast, ensemble processing appears to also rely on central channels and exhibits a profile similar to that of word processing in the frequency domain. Further, we find that feature diagnosticity is stable across participants and is even capable of supporting cross-participant feature selection, as demonstrated by systematic boosts in decoding accuracy and feature reduction. Thus, our investigation sheds new light on the nature and the structure of the information underlying identity-level visual processing as well as on its generality across categories and participants.
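The abstract's core analysis pipeline, recursive feature elimination (RFE) applied to identity-level EEG decoding, can be sketched as follows. This is a minimal illustration using synthetic data, not the authors' actual pipeline: the feature dimensions, classifier choice (logistic regression via scikit-learn), and the number of retained features are all assumptions made for the example.

```python
# Minimal sketch of identity-level decoding with recursive feature
# elimination (RFE). Data here is synthetic stand-in "EEG": trials x
# flattened spatiotemporal features; dimensions are illustrative only.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 120 trials x 200 features (e.g., channels x time points), 4 identities.
n_trials, n_features, n_classes = 120, 200, 4
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
X = rng.standard_normal((n_trials, n_features))
# Make the first 20 features weakly diagnostic of identity.
X[:, :20] += 0.8 * y[:, None]

# RFE ranks features by iteratively dropping the least informative
# ones (per classifier weights), retaining the 20 most diagnostic.
clf = LogisticRegression(max_iter=1000)
selector = RFE(clf, n_features_to_select=20, step=10)
selector.fit(X, y)

# Compare cross-validated decoding accuracy before vs. after selection.
acc_full = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
acc_sel = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, selector.support_], y, cv=5).mean()
print(f"all features: {acc_full:.2f}, selected: {acc_sel:.2f}")
```

In this toy setup, restricting the classifier to the RFE-selected features typically matches or exceeds the full-feature accuracy with a tenfold reduction in feature count, mirroring the "boosts in decoding accuracy and feature reduction" reported in the abstract.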

