Abstract

The grammar, or syntax, of human language is typically understood in terms of abstract hierarchical structures. However, theories of language processing that emphasize sequential information, not hierarchy, successfully model diverse phenomena. Recent work probing brain signals has shown mixed evidence for hierarchical information in some tasks. We ask whether sequential or hierarchical information guides the expectations that a human listener forms about a word's part of speech when simply listening to everyday language. We compare the predictions of three computational models against electroencephalography signals recorded from human participants who listened passively to an audiobook story. We find that predictions based on hierarchical structure correlate with the human brain response above and beyond predictions based only on sequential information. This establishes a link between hierarchical linguistic structure and neural signals that generalizes across the range of syntactic structures found in everyday language.

Highlights

  • The hierarchical syntax of human language sets it apart from other communicative and cognitive systems [1], yet there is significant debate about the role that this syntax plays in how the brain understands and produces language in real-time [2, 3, 4]

  • Negative voltages over frontal electrodes correlate with higher hierarchical surprisal for a word’s part of speech (POS). These results are observed for content words beginning around 200 ms after the onset of a word, but not for function words

  • The context-free grammar (CFG) surprisal results are robust against alternative model parameterizations, such as whether or not variance associated with lower-order sequential surprisals is partialed out (Fig 4, row-set 1), or whether data from participants who did not meet behavioral criteria are factored in (S1 File)

Introduction

The hierarchical syntax of human language sets it apart from other communicative and cognitive systems [1], yet there is significant debate about the role that this syntax plays in how the brain understands and produces language in real time [2, 3, 4]. While neural data are consistent with brain systems that track hierarchical syntax rapidly and incrementally during listening [5, 6, 7, 8], studies that explicitly compare hierarchical syntax with alternatives that lack hierarchy and only encode sequential information show mixed evidence for syntax [9, 10, 11]. We contribute to this debate by using electroencephalography (EEG) to test whether linguistic predictions reflect hierarchy, not just linear sequences, even when participants do something as simple as listen to an audiobook story.

