Abstract

This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. This shared task aimed to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena. Introducing novel automatic and human metrics, we compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures – with the majority implementing sequence-to-sequence (seq2seq) models – as well as systems based on grammatical rules and templates. Seq2seq-based systems demonstrated great potential for NLG in this challenge. We find that seq2seq systems generally score high in terms of word-overlap metrics and human evaluations of naturalness – with the winning Slug system (Juraska et al., 2018) being seq2seq-based. However, vanilla seq2seq models often fail to correctly express a given meaning representation if they lack a strong semantic control mechanism applied during decoding. Moreover, seq2seq models can be outperformed by hand-engineered systems in terms of overall quality, as well as complexity, length and diversity of outputs. This research has influenced, inspired and motivated a number of recent studies outwith the original competition, which we also summarise as part of this paper.
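The abstract's point about seq2seq models failing to "correctly express a given meaning representation" can be made concrete with a toy slot-coverage check. The sketch below is illustrative only: the `attribute[value]` MR format follows the style used in the E2E dataset, but the function names are hypothetical and the string-matching check is a deliberate simplification (real slot-error metrics must handle paraphrases and deleted or hallucinated slots).

```python
def parse_mr(mr: str) -> dict:
    """Parse 'name[The Vaults], eatType[pub]' into {'name': 'The Vaults', ...}."""
    slots = {}
    for part in mr.split(", "):
        attr, _, rest = part.partition("[")
        slots[attr] = rest.rstrip("]")
    return slots

def missing_slots(mr: str, output: str) -> list:
    """Return MR slot values that the generated utterance never mentions
    (naive exact-substring check, case-insensitive)."""
    return [v for v in parse_mr(mr).values() if v.lower() not in output.lower()]

mr = "name[The Vaults], eatType[pub], priceRange[cheap]"
good = "The Vaults is a cheap pub."
bad = "The Vaults is a restaurant."   # drops the eatType and priceRange values

print(missing_slots(mr, good))  # -> []
print(missing_slots(mr, bad))   # -> ['pub', 'cheap']
```

A generator with semantic control during decoding aims to keep the second list empty for every input MR; vanilla seq2seq decoding offers no such guarantee.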

Highlights

  • This paper provides a comprehensive final report and extended analysis of the first shared task on End-to-End (E2E) Natural Language Generation (NLG), substantially extending previous reports (Novikova and Rieser, 2016; Novikova et al., 2017b; Dušek et al., 2018)

  • There is no significant difference in the time taken to collect data with pictorial vs. textual meaning representations (MRs)

  • Utterances produced from pictorial MRs were considered to be significantly (p < 0.001) more natural and better phrased than utterances collected with textual MRs

Summary

Introduction

This paper provides a comprehensive final report and extended analysis of the first shared task on End-to-End (E2E) Natural Language Generation (NLG), substantially extending previous reports (Novikova and Rieser, 2016; Novikova et al., 2017b; Dušek et al., 2018). Beyond this previous work, we provide a corrected and extended evaluation of the training dataset, as well as a detailed discussion of how current state-of-the-art systems address E2E generation challenges, including semantic accuracy and diversity of outputs, and a comparison of techniques used by the submitted systems with systems outside the competition. This paper accompanies a release of all the participating systems' outputs on the test set, along with the human ratings collected in the evaluation campaign.

