Abstract

Speech is often structurally and semantically ambiguous. Here we study how the human brain uses sentence context to resolve lexical ambiguity. Twenty-one participants listened to spoken narratives while magnetoencephalography (MEG) was recorded. Stories were annotated for grammatical word class (noun, verb, adjective) under two hypothesised sources of information: a “bottom-up” hypothesis, the most common word class given the word’s phonology, and a “top-down” hypothesis, the correct word class given the context. We trained a classifier on trials where the two hypotheses matched (about 90% of trials) and tested it on trials where they mismatched. The classifier’s predictions tracked the top-down word class labels and anti-correlated with the bottom-up labels. Effects peaked ∼100 ms after word onset over mid-frontal MEG sensors. Phonetic information was encoded in parallel, though it peaked later (∼200 ms). Our results suggest that lexical representations are built in a context-sensitive manner, a process that precedes sensory phonetic processing. We showcase multivariate analyses for teasing apart subtle representational distinctions in neural time series.
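
The core analysis logic, training a word-class decoder on trials where the bottom-up and top-down labels agree and testing it on trials where they disagree, can be illustrated with a minimal sketch. The snippet below uses synthetic data and a generic scikit-learn time-resolved classifier; the array shapes, variable names, and classifier choice are illustrative assumptions, not the authors’ actual pipeline.

```python
# Hypothetical sketch of the train/test split described in the abstract:
# fit a word-class decoder on trials where the bottom-up and top-down
# labels agree, then evaluate it on trials where they disagree.
# All data here are synthetic; shapes and names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_sensors, n_times = 600, 204, 120      # synthetic MEG epochs
X = rng.standard_normal((n_trials, n_sensors, n_times))
top_down = rng.integers(0, 3, n_trials)           # contextual label: 0=noun, 1=verb, 2=adj
bottom_up = top_down.copy()
mismatch = rng.random(n_trials) < 0.10            # ~10% of trials mismatch
bottom_up[mismatch] = (top_down[mismatch] + 1) % 3

train = ~mismatch                                 # hypotheses agree: labels identical
test = mismatch                                   # hypotheses disagree

# Time-resolved decoding: fit one classifier per time point.
acc_top_down = np.empty(n_times)
acc_bottom_up = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X[train, :, t], top_down[train])
    pred = clf.predict(X[test, :, t])
    acc_top_down[t] = np.mean(pred == top_down[test])
    acc_bottom_up[t] = np.mean(pred == bottom_up[test])

# If the decoder tracks contextual (top-down) word class, acc_top_down
# should exceed chance on mismatch trials while acc_bottom_up falls below it.
```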
