Abstract

To understand visual cognition, it is imperative to determine when, how, and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage, over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions before dynamically converging onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
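
The classification image logic mentioned above can be sketched in a few lines. The Python sketch below illustrates the general reverse-correlation idea only; the function name, array shapes, and the accuracy-based contrast are assumptions for the example, not the authors' actual pipeline.

```python
import numpy as np

def classification_image(masks, correct):
    """Estimate diagnostic image regions from per-trial random masks.

    masks   : (n_trials, H, W) array; masks[i] is the random sampling
              mask applied to the face on trial i (1 = revealed, 0 = hidden).
    correct : (n_trials,) boolean array; True if trial i was categorized
              correctly.

    Returns an (H, W) map whose large values mark pixels whose
    visibility drove correct categorization.
    """
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    # Pixels revealed more often on correct than on incorrect trials
    # are diagnostic of the category.
    ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    # z-score the map so it can be thresholded for significance.
    return (ci - ci.mean()) / ci.std()
```

In the same spirit, the behavioral accuracy vector can be replaced by single-trial EEG amplitudes at a given electrode and time point, which is how feature encoding can be tracked over the N170 and P300 time course.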

Highlights

  • How visual representations evolve over time in the brain remains a challenge for cognitive neuroscience

  • We argue that understanding visual cognition requires determining when, how, and with what information the brain categorizes its visual inputs

  • In the context of a biologically relevant task (the categorization of seven facial expressions of emotion), we have shown how the brain transforms its visual inputs into a categorization-supporting construct

Introduction

How visual representations evolve over time in the brain remains a challenge for cognitive neuroscience. Rather than a fixed use of information from early spatial filters, recent findings suggest that the visual system could be more opportunistic, initially biased by task or context to give immediate priority to the information needed to categorize the input (i.e., diagnostic information), at whatever level of detail that information is represented [12,13,14]. This change of emphasis, from fixed early filtering to a task-dependent, flexible account of encoding, raises a number of critical questions: how does information from all spatial filters become analyzed and combined into a categorization-supporting construct? To address this, observers performed a 7-choice categorization task of FACS-coded facial expressions [26], randomly presented.
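
The notion of information carried at different spatial scales can be made concrete with a band-pass decomposition. The sketch below uses one standard construction (a difference-of-Gaussians pyramid) purely as an illustration; the function name and the one-octave spacing are assumptions, not the study's exact filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_scales(image, n_bands=5, sigma0=1.0):
    """Split a 2-D image into band-pass spatial-frequency scales.

    Each band is the difference between two Gaussian blurs, ordered
    from finest to coarsest; the last element is the low-pass residual.
    """
    bands = []
    current = np.asarray(image, dtype=float)
    sigma = sigma0
    for _ in range(n_bands):
        blurred = gaussian_filter(current, sigma)
        bands.append(current - blurred)  # detail lost at this blur level
        current = blurred                # pass the coarser content on
        sigma *= 2.0                     # one octave per band
    bands.append(current)                # low-pass residual
    return bands
```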
