Abstract

We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.

Highlights

  • The recent spurt of powerful end-to-end-trained neural networks for Natural Language Processing (Hermann et al., 2015; Rocktäschel et al., 2016; Weston et al., 2015, a.o.) has sparked interest in tasks that measure the progress they are bringing about in genuine language understanding.

  • We used a combination of four language models, chosen by availability and/or ease of training: a pre-trained recurrent neural network (RNN) (Mikolov et al., 2011) and three models trained on the Book Corpus.

  • This paper introduced the new LAMBADA dataset, aimed at testing language models on their ability to take a broad discourse context into account when predicting a word.

Summary

Introduction

The recent spurt of powerful end-to-end-trained neural networks for Natural Language Processing (Hermann et al., 2015; Rocktäschel et al., 2016; Weston et al., 2015, a.o.) has sparked interest in tasks that measure the progress they are bringing about in genuine language understanding. A state-of-the-art language model can locally produce perfectly sensible language fragments, but it fails to take the meaning of the broader discourse context into account. We introduce the LAMBADA dataset (LAnguage Modeling Broadened to Account for Discourse Aspects). LAMBADA proposes a word prediction task where the target item is difficult to guess (for English speakers) when only the sentence in which it appears is available, but becomes easy when a broader context is presented. LAMBADA thus casts language understanding in the classic word-prediction framework of language modeling. None of the models we tested came even remotely close to human performance, confirming that LAMBADA is a challenging benchmark for research on automated models of natural language understanding.
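The evaluation protocol implied above can be sketched in a few lines: a model sees the passage with its final word removed and is scored by exact match on that word. The following is a minimal, self-contained illustration, not the paper's actual setup; `predict_last_word` is a deliberately naive stand-in for a real language model (it guesses a recently mentioned capitalized word, echoing the observation that LAMBADA targets are often entities introduced earlier in the passage).

```python
# Toy sketch of LAMBADA-style scoring: predict the missing final word of a
# passage, then measure exact-match accuracy. The "model" here is a crude
# heuristic used only to make the evaluation loop concrete.

STOPWORDS = {"the", "he", "she", "it", "they", "a", "an"}

def predict_last_word(context: str) -> str:
    """Naive stand-in model: return the most recently mentioned
    capitalized word that is not a common function word."""
    for token in reversed(context.split()):
        word = token.strip(".,!?\"'")
        if word.istitle() and word.lower() not in STOPWORDS:
            return word
    return ""

def lambada_accuracy(examples) -> float:
    """Each example is (passage_without_final_word, target_word).
    Score is exact match on the held-out final word."""
    correct = sum(
        predict_last_word(ctx).lower() == target.lower()
        for ctx, target in examples
    )
    return correct / len(examples)

# Two invented toy passages (not drawn from the dataset):
examples = [
    ("Mary smiled . The dog ran to", "Mary"),
    ("The storm passed . The sailors saw the", "harbor"),
]
print(lambada_accuracy(examples))  # 0.5: the heuristic recovers a name
                                   # mentioned in context but not a noun
                                   # that never appeared in the passage
```

The second example shows why local heuristics fail on LAMBADA-style items: when the target word is inferable only from the broader discourse rather than copied from it, shallow context matching scores near zero.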

Related datasets
Data collection
Dataset statistics
Dataset analysis
Modeling experiments
Method
Conclusion
