Abstract

Knowledge-grounded conversation models aim to generate informative responses for a given dialogue context based on external knowledge. To generate an informative and context-coherent response, it is important to integrate the dialogue context and external knowledge in a balanced manner. However, existing studies have paid less attention to finding appropriate knowledge sentences in external knowledge sources than to generating proper sentences with correct dialogue acts. In this paper, we propose two knowledge selection strategies, 1) Reduce-Match and 2) Match-Reduce, and explore several neural knowledge-grounded conversation models based on each strategy. Models based on the Reduce-Match strategy first distill the whole dialogue context into a single vector that preserves its salient features, and then compare this context vector with the representations of candidate knowledge sentences to predict the relevant one. Models based on the Match-Reduce strategy first match every turn of the context against the knowledge sentences to capture fine-grained interactions, and then aggregate these matches while minimizing information loss to predict the relevant knowledge sentence. Experimental results show that conversation models using either of our knowledge selection strategies outperform competitive baselines, not only in knowledge selection accuracy but also in response generation quality. Our best Match-Reduce model outperforms the baselines on the Wizard of Wikipedia dataset, and our best Reduce-Match model outperforms them on the CMU Document Grounded Conversations dataset.
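To make the contrast between the two strategies concrete, here is a minimal sketch of the selection step; this is an illustration, not the authors' implementation. It assumes sentence embeddings for the dialogue turns and knowledge candidates are already computed, and the pooling (mean), matching (dot product), and aggregation (max over turns) choices are placeholders for whatever the actual models use.

```python
import numpy as np

def reduce_match(turn_vecs, knowledge_vecs):
    """Reduce-Match: pool the dialogue turns into a single context
    vector first, then match it against every knowledge sentence."""
    context = turn_vecs.mean(axis=0)            # reduce: (T, d) -> (d,)
    scores = knowledge_vecs @ context           # match:  (K, d) @ (d,) -> (K,)
    return scores.argmax()                      # index of the selected sentence

def match_reduce(turn_vecs, knowledge_vecs):
    """Match-Reduce: match every turn against every knowledge sentence
    first, then aggregate the fine-grained match scores."""
    pair_scores = knowledge_vecs @ turn_vecs.T  # match:  (K, d) @ (d, T) -> (K, T)
    scores = pair_scores.max(axis=1)            # reduce: strongest turn match per sentence
    return scores.argmax()

# Toy usage with random embeddings: T=3 turns, K=5 knowledge sentences, d=8.
rng = np.random.default_rng(0)
turns, knowledge = rng.normal(size=(3, 8)), rng.normal(size=(5, 8))
print(reduce_match(turns, knowledge), match_reduce(turns, knowledge))
```

The design trade-off the abstract describes is visible in the order of operations: Reduce-Match compresses the dialogue before matching, which risks discarding turn-level detail, while Match-Reduce defers aggregation until every turn has been compared against every knowledge sentence.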

Highlights

  • Knowledge-grounded conversation models aim to generate informative responses for a given dialogue context based on external knowledge

  • We propose two knowledge selection strategies: 1) Reduce-Match and 2) Match-Reduce, and explore several knowledge selection methods based on these strategies by using the same neural knowledge-grounded conversation model

  • We performed experiments with various implementations on two publicly available datasets and showed that our best models outperform state-of-the-art models



Introduction

Knowledge-grounded conversation models aim to generate informative responses for a given dialogue context based on external knowledge. Conversation models must naturally grasp the current flow of dialogue to generate human-like responses, and they must identify knowledge sentences that fit the dialogue context. In the paper's motivating example, Response-2 is both context-coherent and informative, while Response-1, though context-coherent, conveys little useful information. By integrating dialogue context and external knowledge in a balanced manner, the model can generate a coherent and informative response.
