Abstract

People categorize objects more slowly when visual input is highly impoverished rather than optimal. While bottom-up models may explain decisions with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. The brain mechanisms and time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects, because top-down processes can recruit object knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, the earliest interaction between impoverishment and object reality started around 250 ms on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes, and secondary higher-order memory involving anterotemporal parts of a default mode network, during a posterior positivity (P600), and (c) response-related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with reaction times. Convergent evidence from studies of vision, memory, and mental imagery, which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT, and decision theory best explains the visual constancy of object cognition.

Highlights

  • People categorize objects accurately even when visual input is impoverished, for example, due to fog, poor lighting, or unusual viewing angles

  • Subjective probability that each picture could be categorized can affect event-related potentials (ERPs), such as P300-like potentials (e.g., P600, LPC) (Johnson, 1986); to assess this, response rates were computed collapsed across both object types

  • Results showed that subjects decided that they could categorize about half of the pictures: 50.0% categorized vs. 49.0% uncategorized [levels 3–5, F(1, 18) = 0.13, p = 0.72]
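The reported null result can be sanity-checked numerically: for an F test with one numerator degree of freedom, F = t², so the p-value equals the two-sided p of a t statistic of magnitude √F with the denominator degrees of freedom. A minimal stdlib-Python sketch (the function names are illustrative, not from the study) recovers the reported p-value from F(1, 18) = 0.13:

```python
import math

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def f_test_pvalue(F, df1, df2, n=10000):
    """Two-tailed p-value for an F statistic with df1 == 1.

    Uses the identity F(1, df2) = t(df2)^2 and integrates the t density
    from 0 to sqrt(F) with Simpson's rule (n must be even).
    """
    assert df1 == 1, "this shortcut only applies when the numerator df is 1"
    t = math.sqrt(F)
    h = t / n
    s = t_pdf(0, df2) + t_pdf(t, df2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df2)
    central = s * h / 3          # area of the t density between 0 and t
    return 1 - 2 * central       # mass in the two tails

p = f_test_pvalue(0.13, 1, 18)   # → p ≈ 0.72, matching the reported value
```

This confirms that F(1, 18) = 0.13 corresponds to p ≈ 0.72, i.e., no reliable difference between the categorized and uncategorized response rates.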


Introduction

People categorize objects accurately (e.g., car, dog, hat) even when visual input is impoverished, for example, due to fog, poor lighting, or unusual viewing angles. They show remarkable visual constancy of categorization: people maintain high accuracy despite suboptimal viewing conditions, though performance is slower with impoverished than with optimal visual stimuli (Palmer et al., 1981; Tarr et al., 1998). Recent evidence implicates additional top-down feedback modulation of posterior information-processing areas to explain human performance fully, especially under more impoverished conditions (Kosslyn et al., 1994), in which case bottom-up models underperform people (Serre et al., 2007a). The time at which the visual constancy of object cognition is achieved under non-optimal conditions in humans has received relatively little attention.
