Abstract
Understanding spoken language requires rapid analysis of incoming information at multiple levels. Information at lower levels (e.g. acoustic/phonetic) cascades forward to affect processing at higher levels (e.g. lexical/semantic), and higher-level information may feed back to influence lower-level processing. Most studies have sought to examine a single stage of processing in isolation. Consequently, there is a poor understanding of how different stages relate temporally. In the present study, we characterise multiple stages of linguistic processing simultaneously as they unfold. Listeners (N = 30) completed a priming task while we collected their EEG, where a picture (e.g. of a peach) biased them to expect a target word from a minimal pair (e.g. beach/peach). We examine the processes of perceptual gradiency, semantic integration, and top-down feedback, to yield a more complete understanding of how these processes relate in time. Then, we discuss how the results from simplified priming paradigms may compare to more naturalistic settings.