Abstract

Responses in task-oriented dialogue systems often realize multiple propositions whose ultimate form depends on the use of sentence planning and discourse structuring operations. For example, a recommendation may consist of an explicitly evaluative utterance, e.g., Chanpen Thai is the best option, along with content related by the justification discourse relation, e.g., It has great food and service, which combines multiple propositions into a single phrase. While neural generation methods integrate sentence planning and surface realization in one end-to-end learning framework, previous work has not shown that neural generators can: (1) perform common sentence planning and discourse structuring operations; (2) make decisions as to whether to realize content in a single sentence or over multiple sentences; (3) generalize sentence planning and discourse relation operations beyond what was seen in training. We systematically create large training corpora that exhibit particular sentence planning operations and then test neural models to see what they learn. We compare models without explicit latent variables for sentence planning with models that receive explicit supervision during training. We show that only the models with additional supervision can reproduce sentence planning and discourse operations and generalize to situations unseen in training.
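
The core contrast in these experiments is between inputs that carry no sentence-planning signal and inputs where the plan is made explicit. The following is a minimal sketch of that idea, assuming a hypothetical PERIOD boundary token and illustrative helper names; it is not the authors' code or encoding.

    # Minimal sketch: MR linearization with and without sentence-planning supervision.
    # The PERIOD token and function names are illustrative assumptions.

    def linearize_mr(attributes):
        """Flatten an MR into a token sequence with no planning supervision."""
        return " ".join(f"{slot}[{value}]" for slot, value in attributes)

    def linearize_mr_with_periods(sentence_groups):
        """Flatten an MR whose attributes are pre-grouped into sentences;
        a PERIOD token between groups marks intended sentence boundaries."""
        parts = [" ".join(f"{slot}[{value}]" for slot, value in group)
                 for group in sentence_groups]
        return " PERIOD ".join(parts)

    mr = [("NAME", "ZIZZI"), ("PRICERANGE", "MODERATE"), ("AREA", "RIVERSIDE"),
          ("FOOD", "ENGLISH"), ("EATTYPE", "PUB"), ("NEAR", "AVALON"),
          ("FAMILYFRIENDLY", "NO")]

    print(linearize_mr(mr))
    # A three-sentence plan: each group is to be realized as its own sentence.
    print(linearize_mr_with_periods([mr[:3], mr[3:5], mr[5:]]))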

Highlights

  • Neural natural language generation (NNLG) promises to simplify the process of producing high-quality responses for conversational agents by relying on the neural architecture to automatically learn how to map an input meaning representation (MR) from the dialogue manager to an output utterance (Gasic et al., 2017; Sutskever et al., 2014)

  • In the case of NOSUP, we compare the number of sentences in the generated output to the number in the corresponding test reference, and for PERIODCOUNT, we compare the number of sentences in the generated output to the number of sentences we explicitly encode in the MR (both comparisons are sketched after this list)

  • We carry out an additional experiment to test generalization of the PERIODCOUNT model, where we randomly select a set of 31 MRs from the test set and create a test instance for each possible PERIOD count value, from 1 to N-1, where N is the number of attributes in that MR (i.e., PERIOD=1 means all attributes are realized in the same sentence, and PERIOD=N-1 means that each attribute is realized in its own sentence, except for the restaurant name, which is never realized in its own sentence)
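
A rough sketch of these checks, assuming a simple punctuation-based sentence splitter; the helper names and the splitting heuristic are illustrative and not the paper's actual evaluation code.

    import re

    def count_sentences(text):
        """Crude sentence count: split on sentence-final punctuation.
        (The paper's exact segmentation may differ; this is an assumption.)"""
        return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

    def nosup_correct(generated, reference):
        """NOSUP check: the output should use as many sentences as the test reference."""
        return count_sentences(generated) == count_sentences(reference)

    def periodcount_correct(generated, encoded_sentence_count):
        """PERIODCOUNT check: the output should use exactly the number of
        sentences explicitly encoded in the MR."""
        return count_sentences(generated) == encoded_sentence_count

    def generalization_instances(mr_attributes):
        """Generalization test: one instance per PERIOD count value from 1 to N-1,
        where N is the number of attributes in the MR (the restaurant name never
        gets a sentence of its own, so N-1 is the maximum)."""
        n = len(mr_attributes)
        return [(period, list(mr_attributes)) for period in range(1, n)]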

Introduction

Neural natural language generation (NNLG) promises to simplify the process of producing high-quality responses for conversational agents by relying on the neural architecture to automatically learn how to map an input meaning representation (MR) from the dialogue manager to an output utterance (Gasic et al., 2017; Sutskever et al., 2014). The same MR can be realized as a single sentence or distributed over several sentences, as in the examples below.

MR: PRICERANGE[MODERATE], AREA[RIVERSIDE], NAME[ZIZZI], FOOD[ENGLISH], EATTYPE[PUB], NEAR[AVALON], FAMILYFRIENDLY[NO]

1 Sent: Zizzi is moderately priced in riverside, it isn't family friendly, it's a pub, and it is an English place near Avalon.
3 Sents: Moderately priced Zizzi isn't kid friendly, it's in riverside and it is near Avalon.
5 Sents: Zizzi is moderately priced near Avalon
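
One way paired data like this can be turned into explicit supervision is to count the sentences in each human reference and attach that count to the MR as an extra token. A small sketch under that assumption follows; the PERIODCOUNT token format and variable names are hypothetical, not the paper's exact encoding.

    import re

    MR = ("NAME[ZIZZI] PRICERANGE[MODERATE] AREA[RIVERSIDE] FOOD[ENGLISH] "
          "EATTYPE[PUB] NEAR[AVALON] FAMILYFRIENDLY[NO]")

    references = [
        "Zizzi is moderately priced in riverside, it isn't family friendly, "
        "it's a pub, and it is an English place near Avalon.",
    ]

    def sentence_count(text):
        # Split on sentence-final punctuation; good enough for this illustration.
        return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

    # Attach each reference's sentence count to the MR as a supervision token,
    # yielding (source, target) pairs for a sequence-to-sequence generator.
    training_pairs = [(f"PERIODCOUNT[{sentence_count(ref)}] {MR}", ref)
                      for ref in references]

    for src, tgt in training_pairs:
        print(src, "->", tgt)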
