Abstract

We review information-theoretic measures of cognitive load during sentence processing that have been used to quantify word prediction effort. Two such measures, surprisal and next-word entropy, suffer from shortcomings when employed for a predictive processing view. We propose a novel metric, lookahead information gain, that can overcome these shortcomings. We estimate the different measures using probabilistic language models. Subsequently, we put them to the test by analysing how well the estimated measures predict human processing effort in three data sets of naturalistic sentence reading. Our results replicate the well-known effect of surprisal on word reading effort, but do not indicate a role for next-word entropy or lookahead information gain. Our computational results suggest that, in a predictive processing system, the costs of predicting may outweigh the gains. This idea poses a potential limit to the value of a predictive mechanism for the processing of language. The result illustrates the unresolved problem of finding estimations of word-by-word prediction that, first, are truly independent of perceptual processing of the to-be-predicted words; second, are statistically reliable predictors of experimental data; and third, can be derived from more general assumptions about the cognitive processes involved.
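For readers unfamiliar with the two established measures, the following minimal sketch shows how surprisal and next-word entropy are computed from a language model's next-word probability distribution. The toy distribution and word choices are hypothetical, purely for illustration; the paper's novel lookahead information gain metric is not reproduced here, as its definition is given in the full text.

```python
import math

def surprisal(prob):
    """Surprisal of an observed word: -log2 P(word | context), in bits.
    Higher surprisal = less expected word = more processing effort."""
    return -math.log2(prob)

def next_word_entropy(dist):
    """Shannon entropy of the next-word distribution, in bits.
    Quantifies uncertainty about the upcoming word before it is seen."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-word distribution from some language model
dist = {"cat": 0.5, "dog": 0.25, "fish": 0.25}

print(surprisal(dist["cat"]))   # 1.0 bits
print(next_word_entropy(dist))  # 1.5 bits
```

In practice, such distributions would be estimated by a probabilistic language model conditioned on the preceding sentence context, as the abstract describes.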
