Abstract

Perceptual experience results from a complex interplay of bottom-up input and prior knowledge about the world, yet the extent to which knowledge affects perception, the neural mechanisms underlying these effects, and the stages of processing at which these two sources of information converge, are still unclear. In several experiments we show that language, in the form of verbal labels, both aids recognition of ambiguous “Mooney” images and improves objective visual discrimination performance in a match/non-match task. We then used electroencephalography (EEG) to better understand the mechanisms of this effect. The improved discrimination of previously labeled images was accompanied by a larger occipital-parietal P1 evoked response to the meaningful versus meaningless target stimuli. Time-frequency analysis of the interval between the cue and the target stimulus revealed increases in the power of posterior alpha-band (8–14 Hz) oscillations when the meaning of the stimuli to be compared was trained. The magnitude of the pre-target alpha difference and the P1 amplitude difference were positively correlated across individuals. These results suggest that prior knowledge prepares the brain for upcoming perception via the modulation of alpha-band oscillations, and that this preparatory state influences early (~120 ms) stages of visual processing.
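The abstract's key pre-target measure is alpha-band (8–14 Hz) power in the cue-target interval. The sketch below is not the authors' analysis pipeline; it is a minimal illustration of one common way to estimate alpha power from epoched EEG (zero-phase band-pass filtering followed by a Hilbert envelope), assuming a hypothetical NumPy array of single-channel epochs and a placeholder sampling rate.

```python
# Minimal sketch (not the authors' pipeline): pre-target alpha-band (8-14 Hz) power
# from epoched EEG. Assumes `epochs` is a NumPy array of shape (n_trials, n_samples)
# for one posterior channel, sampled at `fs` Hz.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(epochs: np.ndarray, fs: float, band=(8.0, 14.0)) -> np.ndarray:
    """Return the trial-averaged time course of alpha-band power."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=-1)      # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered, axis=-1))   # analytic (Hilbert) amplitude
    return (envelope ** 2).mean(axis=0)             # power, averaged across trials

# Hypothetical usage: compare pre-target alpha power for meaning-trained vs. untrained trials.
# fs = 500.0
# alpha_trained = alpha_power(trained_epochs, fs)
# alpha_untrained = alpha_power(untrained_epochs, fs)
```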

Highlights

  • A chief function of visual perception is to “provide a description that is useful to the viewer”[1], that is, to construct meaning[2,3].

  • Mayer and colleagues demonstrated that when the identity of a target letter could be predicted, pre-target alpha power increased over left-lateralized posterior sensors[39]. These findings suggest that alpha-band dynamics are involved in establishing perceptual predictions in anticipation of perception.

  • Our findings suggest that using language to ascribe meaning to ambiguous images impacts early visual processing by biasing pre-target neural activity in the alpha-band.


Results

This analysis revealed a significant positive correlation (rho = 0.52, p = 0.04, bootstrap 95% CI = [0.08, 0.82]) over left electrodes, indicating that individuals who showed a greater increase in pre-target alpha from meaning training had a larger effect of meaning on P1 amplitudes (see Fig. 6A). This relationship was not significant over right-hemisphere electrodes (rho = −0.21, p = 0.42, bootstrap 95% CI = [−0.71, 0.41]; Fig. 6B). We observed no modulation of alpha power by the location of the object within the Mooney image at either the left (all p-values > 0.94, time-cluster corrected) or right electrode clusters (all p-values > 0.35, time-cluster corrected), suggesting that spatial attention is not the source of the effects of meaning training.
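For readers unfamiliar with the statistic reported here, the sketch below shows one standard way to compute a Spearman correlation with a percentile-bootstrap 95% confidence interval across participants. The variable names are hypothetical placeholders for the per-participant alpha-power and P1 difference scores; this is an illustration of the general technique, not the authors' code.

```python
# Minimal sketch (assumed variable names): Spearman's rho with a percentile
# bootstrap 95% CI, relating each participant's pre-target alpha difference
# to their P1 amplitude difference.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def spearman_bootstrap_ci(x, y, n_boot=10_000, alpha=0.05):
    """Return Spearman's rho, its p-value, and a percentile bootstrap CI."""
    x, y = np.asarray(x), np.asarray(y)
    rho, p = spearmanr(x, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample participants with replacement
        boots[i] = spearmanr(x[idx], y[idx]).correlation
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return rho, p, (lo, hi)

# Hypothetical usage with per-participant difference scores:
# rho, p, ci = spearman_bootstrap_ci(alpha_diff_left, p1_diff)
```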

