The psycholinguistic literature has consistently shown that humans rely on a rich and organized understanding of event knowledge to predict the forthcoming linguistic input during online sentence comprehension. Comprehenders expect sentences to cohere with the preceding context, which makes congruent sentence sequences easier to process than incongruent ones. Discourse relations between sentences (e.g., temporal, contingency, comparison) are generally made explicit through specific particles known as discourse connectives (e.g., and, but, because, after). However, some relations that are easily accessible to speakers, given their event knowledge, can also be left implicit. The goal of this paper is to investigate the importance of discourse connectives for event prediction in human language processing and in pretrained language models, with a specific focus on concessive and contrastive connectives, which signal to comprehenders that their event-related predictions have to be reversed. Inspired by previous work, we built a comprehensive set of story stimuli in Italian and Mandarin Chinese that differ in the plausibility and coherence of the described situation and in the presence or absence of a discourse connective. We collected plausibility judgments and reading times from native speakers for these stimuli. Moreover, we correlated the experimental results with the predictions of computational models, using Surprisal scores obtained from Transformer-based neural language models (NLMs). The human judgments were collected on a seven-point Likert scale and analyzed with cumulative link mixed models (CLMMs), while the human reading times and language model Surprisal scores were analyzed with linear mixed-effects regression (LMER). We found that the Chinese NLMs are sensitive to plausibility and connectives, although they struggle to reproduce the expectation-reversal effects that arise when a connective changes the plausibility of a given scenario; the Italian results are even less aligned with the human data, with no effect of either plausibility or connectives on Surprisal.
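For concreteness, the sketch below illustrates one common way of obtaining the Surprisal of a target region from a causal Transformer language model with the Hugging Face transformers library. The checkpoint name, the region_surprisal helper, and the example sentences are illustrative assumptions; they are not the Italian or Chinese models or materials used in the study, and the separate tokenization of context and target is a simplification.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative checkpoint only; the abstract does not name the Italian/Chinese
# Transformer models that were used, so "gpt2" stands in as a placeholder.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def region_surprisal(context: str, target: str) -> float:
    """Summed surprisal (-log2 p) of the `target` tokens given `context`."""
    context_ids = tokenizer(context, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, target_ids], dim=1)

    with torch.no_grad():
        # Log-probability of every vocabulary item at every position.
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)

    total = 0.0
    offset = context_ids.shape[1]
    for i in range(target_ids.shape[1]):
        token_id = input_ids[0, offset + i]
        # The prediction for position offset+i is read from the preceding position.
        token_logp = log_probs[0, offset + i - 1, token_id].item()
        total += -token_logp / math.log(2)  # convert nats to bits
    return total

# Hypothetical example: Surprisal of the same continuation after a contrastive
# connective ("but") versus an additive one ("and").
print(region_surprisal("The ice was thin, but", " he skated across the lake anyway."))
print(region_surprisal("The ice was thin, and", " he skated across the lake anyway."))
```

Region-level Surprisal values obtained in this way can then be entered as the dependent variable in the LMER analyses mentioned above, in parallel to the human reading times.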