Abstract

Neural natural language generation (NNLG) from structured meaning representations has become increasingly popular in recent years. While we have seen progress with generating syntactically correct utterances that preserve semantics, various shortcomings of NNLG systems are clear: new tasks require new training data which is not available or straightforward to acquire, and model outputs are simple and may be dull and repetitive. This paper addresses these two critical challenges in NNLG by: (1) scalably (and at no cost) creating training datasets of parallel meaning representations and reference texts with rich style markup by using data from freely available and naturally descriptive user reviews, and (2) systematically exploring how the style markup enables joint control of semantic and stylistic aspects of neural model output. We present YelpNLG, a corpus of 300,000 rich, parallel meaning representations and highly stylistically varied reference texts spanning different restaurant attributes, and describe a novel methodology that can be scalably reused to generate NLG datasets for other domains. The experiments show that the models control important aspects, including lexical choice of adjectives, output length, and sentiment, allowing the models to successfully hit multiple style targets without sacrificing semantics.

Highlights

  • The increasing popularity of personal assistant dialog systems and the success of end-to-end neural models on problems such as machine translation have led to a surge of interest in data-to-text neural natural language generation (NNLG).

  • The real power of NNLG models over traditional statistical generators is their ability to produce natural language output from structured input in a completely data-driven way, without needing hand-crafted rules or templates. However, these models suffer from two critical bottlenecks: (1) a data bottleneck, i.e., the lack of large parallel training data mapping meaning representations (MRs) to natural language (NL), and (2) a control bottleneck, i.e., the inability to systematically control important aspects of the generated output to allow for more stylistic variation.

  • Note that we present the longest version of the MR, so the BASE, +ADJ, and +SENT models use the same MR minus the additional information (see the sketch after this list).
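
As an illustration of that point, the full MR can be thought of as carrying optional layers of markup that are dropped to produce each simpler model's input. The dictionary layout, field names, and helper function below are hypothetical stand-ins, not the paper's exact schema:

```python
# Hypothetical MR for one reference sentence (illustrative layout, not the
# exact YelpNLG schema). The full MR carries optional style markup that the
# simpler models do not see.
mr_full = {
    "attributes": [
        ("food", "sushi", {"adj": "fresh", "mention": 1}),
        ("service", "staff", {"adj": "friendly", "mention": 1}),
    ],
    "sentiment": "positive",  # extra markup used by +SENT (and richer) models
    "length": "medium",       # extra style markup used by the richest model
}

def mr_view(mr, keep_adj=False, keep_sent=False, keep_len=False):
    """Derive a model's input by dropping markup from the full MR."""
    attrs = [
        (attr, val,
         extra if keep_adj else {k: v for k, v in extra.items() if k != "adj"})
        for attr, val, extra in mr["attributes"]
    ]
    view = {"attributes": attrs}
    if keep_sent:
        view["sentiment"] = mr["sentiment"]
    if keep_len:
        view["length"] = mr["length"]
    return view

base = mr_view(mr_full)                                 # BASE: attributes only
adj = mr_view(mr_full, keep_adj=True)                   # +ADJ: adds adjectives
sent = mr_view(mr_full, keep_adj=True, keep_sent=True)  # +SENT: adds sentiment
```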


Summary

Introduction

The increasing popularity of personal assistant dialog systems and the success of end-to-end neural models on problems such as machine translation have led to a surge of interest in data-to-text neural natural language generation (NNLG). Rather than starting with a meaning representation and collecting human references, we begin with the references (in the form of review sentences) and work backwards, systematically constructing meaning representations for the sentences using dependency parses and rich sets of lexical, syntactic, and sentiment information, including ontological knowledge from DBPedia. This method uniquely exploits existing data which is naturally rich in semantic content, emotion, and varied language.
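
A minimal sketch of this reverse-construction idea, using spaCy for dependency parsing. The toy attribute lexicon, the function name, and the output tuple layout are illustrative assumptions standing in for the paper's richer lexical resources and DBPedia ontology:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy attribute lexicon (placeholder for the paper's lexical resources
# and DBPedia-derived ontological knowledge).
ATTRIBUTE_LEXICON = {"sushi": "food", "staff": "service", "price": "price"}

def sentence_to_mr(sentence):
    """Work backwards from a reference sentence to a rough MR: find
    attribute nouns via the lexicon and attach any adjectival modifiers
    found in the dependency parse."""
    doc = nlp(sentence)
    mr = []
    for token in doc:
        attr = ATTRIBUTE_LEXICON.get(token.lemma_.lower())
        if attr is None:
            continue
        adjs = [child.text.lower() for child in token.children
                if child.dep_ == "amod"]
        mr.append((attr, token.lemma_.lower(), adjs))
    return mr

print(sentence_to_mr("The fresh sushi and friendly staff made our night."))
# e.g. [('food', 'sushi', ['fresh']), ('service', 'staff', ['friendly'])]
```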

Creating the YelpNLG Corpus
Comparison to Previous Datasets
Model Design
Evaluation
Automatic Semantic Evaluation
Automatic Stylistic Evaluation
Human Quality Evaluation
Related Work
Conclusions
A Model Configurations