Abstract

Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement on how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. Then we compare CCP with three probabilistic language models for predicting word viewing times in an English and a German eye-tracking sample: (1) Symbolic n-gram models consolidate syntactic and semantic short-range relations by computing the probability that a word occurs given the two preceding words. (2) Topic models rely on subsymbolic representations to capture long-range semantic similarity from word co-occurrence counts in documents. (3) In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentence. To examine lexical retrieval, these models were used to predict single-fixation durations and gaze durations, which capture rapidly successful and standard lexical access, and total viewing time, which captures late semantic integration. The linear item-level analyses showed that all language models correlated more strongly with all eye-movement measures than CCP did. We then examined non-linear relations between the different types of predictability and the reading times using generalized additive models. N-gram and RNN probabilities of the present word predicted reading performance more consistently than topic models or CCP did. For the effects of last-word probability on current-word viewing times, we obtained the best results with n-gram models. Such count-based models seem to best capture short-range access that is still underway when the eyes move on to the subsequent word. The prediction-trained RNN models, in contrast, better predicted early preprocessing of the next word. In sum, our results demonstrate that the different language models account for differential cognitive processes during reading. We discuss these algorithmically concrete blueprints of lexical consolidation as theoretically deep explanations for human reading.
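
To make the count-based model class concrete, the following is a minimal sketch of an n-gram approach of the kind described above: a trigram model that assigns each word a conditional probability given its two predecessors. The toy corpus, the add-alpha smoothing, and all function names are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def train_trigram_counts(sentences):
    """Count trigrams and their two-word contexts in a list of tokenized sentences."""
    tri, bi = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>", "<s>"] + tokens + ["</s>"]
        for i in range(2, len(padded)):
            tri[tuple(padded[i - 2:i + 1])] += 1
            bi[tuple(padded[i - 2:i])] += 1
    return tri, bi

def trigram_prob(w, w1, w2, tri, bi, vocab_size, alpha=0.1):
    """P(w | w1, w2) with add-alpha smoothing (an illustrative smoothing choice)."""
    return (tri[(w1, w2, w)] + alpha) / (bi[(w1, w2)] + alpha * vocab_size)

# Toy usage: probability of "lexicon" given the context "the mental".
sentences = [["access", "to", "the", "mental", "lexicon"],
             ["the", "mental", "lexicon", "stores", "words"]]
tri, bi = train_trigram_counts(sentences)
vocab_size = len({w for s in sentences for w in s} | {"</s>"})
print(trigram_prob("lexicon", "the", "mental", tri, bi, vocab_size))  # 0.75
```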

Highlights

  • Concerning the influence of single-word properties, there is a strong consensus in the word recognition literature that word length and frequency are the most reliable predictors of lexical access (e.g., Reichle et al., 2003; New et al., 2006; Adelman and Brown, 2008; Brysbaert et al., 2011)

  • We focused on eye movements and aimed to replicate the single-fixation duration (SFD) effects with a second sample, which was published by Schilling et al. (1998)

  • In addition to the logit-transformed cloze completion probability (CCP) and the log10-transformed language model probabilities (Kliegl et al., 2006; Smith and Levy, 2013), we explored the correlations of the non-transformed probability values with SFD, gaze duration (GD) and total viewing time (TVT) data, respectively: In the SRC data set, CCP provided correlations of −0.28, −0.33, and −0.39; n-gram models of −0.11, −0.16 and −0.21; topic models of −0.35, −0.47 and −0.52; and recurrent neural network (RNN) models provided correlations of −0.16, −0.23, and −0.25, respectively (a minimal sketch of these transformations is given below)
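
The last highlight refers to logit-transformed CCP and log10-transformed language-model probabilities being correlated with viewing times at the item level. Below is a minimal sketch of that kind of computation; the per-word values are hypothetical placeholders, and the epsilon correction for the logit is an illustrative choice rather than the transformation reported in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical per-word predictability values and single-fixation durations (ms).
ccp = np.array([0.05, 0.40, 0.75, 0.10, 0.90, 0.30])        # cloze completion probabilities
lm_prob = np.array([0.002, 0.03, 0.20, 0.008, 0.35, 0.05])  # language-model probabilities
sfd = np.array([240.0, 215.0, 190.0, 250.0, 180.0, 225.0])  # single-fixation durations

eps = 0.01  # illustrative correction keeping the logit finite at 0 and 1
logit_ccp = np.log((ccp + eps) / (1.0 - ccp + eps))
log10_lm = np.log10(lm_prob)

for name, predictor in [("logit CCP", logit_ccp), ("log10 LM probability", log10_lm)]:
    r, p = stats.pearsonr(predictor, sfd)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```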

Introduction

Concerning the influence of single-word properties, there is a strong consensus in the word recognition literature that word length and frequency are the most reliable predictors of lexical access (e.g., Reichle et al., 2003; New et al., 2006; Adelman and Brown, 2008; Brysbaert et al., 2011). The traditional psychological predictor variables are based on human performance. When aiming to quantify how syntactic and semantic contextual word features influence the reading of the present word, Taylor’s (1953) cloze completion probability (CCP) still represents the performance-based state of the art for predicting sentence reading in psychological research (Kutas and Federmeier, 2011; Staub, 2015). Participants in a preexperimental study are given a sentence with a missing word, and the relative number of participants completing it with the respective word is taken to define that word’s CCP. One human performance is thus used to account for another human performance, such as reading. When two directly observable variables, such as CCP and reading times, are connected, this corresponds to what Feigl (1945, p. 285), for instance, called a “‘low-grade’ explanation.” Likewise, models of eye movement control were “not intended to be a deep explanation of language processing, [. . . because they do] not account for the many effects of higher-level linguistic processing on eye movements” (Reichle et al., 2003, p. 450).
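
As a concrete illustration of how CCP is derived from such a norming task, here is a minimal sketch assuming a hypothetical set of participant completions for one sentence frame; the function name and the example responses are invented for illustration only.

```python
from collections import Counter

def cloze_completion_probability(responses, target):
    """CCP of `target`: proportion of participants who completed the gap with it."""
    counts = Counter(r.strip().lower() for r in responses)
    return counts[target.lower()] / len(responses)

# Hypothetical completions for "She stirred her coffee with a ___".
responses = ["spoon"] * 17 + ["straw"] * 2 + ["stick"]
print(cloze_completion_probability(responses, "spoon"))  # 0.85
```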
