Abstract

This study compares a GPT-2-based neural-network language model (NLM) with humans in processing sentences with three types of garden-path structure: NP/S (noun phrase/sentential complement), NP/Z (noun phrase/zero complement), and MV/RR (main verb/reduced relative clause). It asks whether the surprisal values computed by the GPT-2 NLM pattern with human reading times across these three garden-path constructions; the surprisal of a sentence-internal word, measured as its negative log-likelihood under the autoregressive language model, serves as the measure of processing difficulty. The study finds that, like humans, the GPT-2 NLM reliably distinguishes ambiguous from unambiguous sentences in each construction type. However, the model deviates sharply from humans in the garden-path effects it registers, that is, in the magnitude of processing cost induced by each type of garden-path structure. Pending a fuller account of the parallelism between reading time and surprisal, the GPT-2 NLM as a language learner has yet to attain a human-like, fine-grained ability to distinguish the different types of garden-path structure.
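For concreteness, the following is a minimal sketch of how per-token surprisal can be computed from GPT-2 in the way the abstract describes, assuming the Hugging Face transformers library; the checkpoint name ("gpt2"), the function name, and the example sentence are illustrative assumptions, not details taken from the study.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative checkpoint; the study's exact GPT-2 variant is not specified here.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal) pairs, where surprisal is the negative
    log-probability of each token given its left context (in nats;
    divide by math.log(2) for bits)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids  # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    surprisals = []
    # The token at position t is scored by the distribution predicted at
    # position t-1, so the first token receives no surprisal value.
    for t in range(1, ids.size(1)):
        lp = log_probs[0, t - 1, ids[0, t]]
        surprisals.append((tokenizer.decode(ids[0, t].item()), -lp.item()))
    return surprisals

# Hypothetical NP/Z-style example: the disambiguating verb "ran" would be
# expected to carry elevated surprisal relative to an unambiguous control.
for tok, s in token_surprisals("While the man hunted the deer ran into the woods."):
    print(f"{tok!r}\t{s:.2f}")
```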
