Abstract

Early stages of visual processing are carried out by neural circuits activated by simple and specific features, such as the orientation of an edge. A fundamental question in human vision is how the brain organises such intrinsically local information into meaningful properties of objects. Classic models of visual processing emphasise a one-directional flow of information from early feature-detectors to higher-level information processing. In contrast to this view, and in line with predictive-coding models of perception, here we provide evidence from human vision that high-level object representations dynamically interact with the earliest stages of cortical visual processing. In two experiments, we used ambiguous stimuli that, depending on the observer's prior object knowledge, can be perceived either as coherent objects or as a collection of meaningless patches. By manipulating object knowledge, we were able to determine its impact on the processing of low-level features while keeping sensory stimulation identical. Both studies demonstrate that perception of local features is facilitated in a manner consistent with an observer's high-level object representation (i.e., with no effect on object-inconsistent features). Our results cannot be ascribed to attentional influences. Rather, they suggest that high-level object representations interact with and sharpen early feature-detectors, optimising their performance for the current perceptual context.

Highlights

  • The classical view of neurons in the early visual system is that they are selectively driven by specific perceptual features falling within a particular region of visual space[1,2]

  • While early bottom-up processing was initially thought to be shielded from higher-level modulatory effects, top-down influences of spatial[20], feature-[21], and object-based attention[22] on early visual processing are well established

  • Predictive-coding models of perception have been important in challenging the idea that early vision is carried out by relatively static spatiotemporal filters, the output of which is modulated solely by attention. Rather, they suggest that higher-level visual and memory representations dynamically interact with and shape early visual processing[16,31,32]

Introduction

The classical view of neurons in the early visual system is that they are selectively driven by specific perceptual features falling within a particular region of visual space[1,2]. Cells in primary visual cortex (V1) of many mammals, including humans, respond best to small, local edges of certain orientations, and have been characterised as being largely blind to other features, or to locations outside 'their' patch of visual space[3,4,5,6,7,8,9]. The selectivity of these neurons is achieved by differential combinations of the outputs from cells in the retina and subcortical pathway (as well as by horizontal interactions between neurons within V1 that process neighbouring parts of a visual scene)[3,4,7,10].

Predictive-coding models of perception have been important in challenging the idea that early vision is carried out by relatively static spatiotemporal filters, the output of which is modulated solely by attention. Rather, they suggest that higher-level visual and memory representations dynamically interact with and shape early visual processing[16,31,32]. The (task-irrelevant) high-level representation of the stimulus within which an edge probe is embedded can be manipulated independently, by controlling whether or not an observer has prior knowledge of image content.
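To make the contrast between these two views concrete, the sketch below implements a minimal Rao & Ballard-style predictive-coding loop over a small bank of oriented Gabor filters (the textbook model of V1 receptive fields). It is an illustrative toy, not the model tested here: the function names (gabor, infer), the four-orientation dictionary, the prior-coupling weight lam, and all parameter values are assumptions chosen for clarity. Feature activities settle by minimising the bottom-up prediction error while being pulled toward a top-down prior that stands in for object knowledge.

```python
import numpy as np

def gabor(size, theta, freq=0.25, sigma=2.0):
    """Oriented Gabor patch: the textbook model of a V1 receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate the carrier axis
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return (g / np.linalg.norm(g)).ravel()

# Dictionary of oriented "V1" filters at four orientations.
size = 9
thetas = np.deg2rad([0, 45, 90, 135])
U = np.stack([gabor(size, t) for t in thetas], axis=1)   # (pixels, features)

def infer(x, r_prior, lam=0.5, eta=0.1, steps=200):
    """Settle feature activities r by minimising the prediction error
    ||x - U r||^2 while being pulled toward the top-down prior r_prior."""
    r = np.zeros(U.shape[1])
    for _ in range(steps):
        e = x - U @ r                                # bottom-up prediction error
        r += eta * (U.T @ e + lam * (r_prior - r))   # error-driven update
    return r

# Input: a noisy 45-degree edge.
rng = np.random.default_rng(0)
x = U[:, 1] + 0.8 * rng.standard_normal(U.shape[0])

flat_prior = np.zeros(4)                        # no object knowledge
edge_prior = np.array([0.0, 1.0, 0.0, 0.0])     # "the object implies a 45° edge here"

print("without prior:", np.round(infer(x, flat_prior), 2))
print("with prior:   ", np.round(infer(x, edge_prior), 2))
```

Run under these assumptions, the 45° unit settles at a higher activity when the object-consistent prior is supplied, while the estimates for the other orientations remain roughly unchanged: a toy analogue of the pattern probed psychophysically here, namely facilitation of object-consistent local features with no comparable effect on object-inconsistent ones.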
