Abstract

Information‐theoretical complexity metrics are auxiliary hypotheses that link theories of parsing and grammar to potentially observable measurements such as reading times and neural signals. This review article considers two such metrics, Surprisal and Entropy Reduction, which are respectively built upon the two most natural notions of ‘information value’ for an observed event (Blachman). This review sketches their conceptual background and touches on their relationship to other theories in cognitive science. It characterizes them as ‘lenses’ through which theorists ‘see’ the information‐processing consequences of linguistic grammars. While these metrics are not themselves parsing algorithms, the review identifies candidate mechanisms that have been proposed for both of them.
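To make the two notions of ‘information value’ concrete, the following is a minimal illustrative sketch in Python; it is not taken from the article, and the `prior` and `posterior` distributions are invented toy values over possible next words rather than the probabilistic grammars the metrics are normally defined over. Surprisal is the negative log probability of the word actually observed, and Entropy Reduction is the non-negative drop in Shannon entropy over possible continuations that the observation brings about.

```python
import math

# Invented toy distributions over the next word, before and after new input.
# The article defines these quantities over probabilistic grammars; a flat
# word distribution is used here only to keep the arithmetic visible.
prior = {"dog": 0.5, "cat": 0.3, "yak": 0.2}        # P(next word) before the observation
posterior = {"dog": 0.9, "cat": 0.05, "yak": 0.05}  # P(next word) after the observation

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Surprisal: the information value of the observed event itself,
# -log2 P(event). Suppose the word actually observed is "dog".
surprisal = -math.log2(prior["dog"])

# Entropy Reduction: how much uncertainty about the continuation the
# observation removed, floored at zero.
entropy_reduction = max(0.0, entropy(prior) - entropy(posterior))

print(f"Surprisal:         {surprisal:.3f} bits")
print(f"Entropy Reduction: {entropy_reduction:.3f} bits")
```

On these toy numbers, Surprisal comes out to 1 bit and Entropy Reduction to roughly 0.92 bits; the two quantities can dissociate, which is what makes them empirically distinguishable linking hypotheses.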
