Abstract

Knowledge selection plays an important role in knowledge-grounded dialogue, a challenging task that aims to generate more informative responses by leveraging external knowledge. Recently, latent variable models have been proposed to handle the diversity of knowledge selection by using both prior and posterior distributions over knowledge, and they achieve promising performance. However, these models suffer from a large gap between prior and posterior knowledge selection. First, the prior selection module may not learn to select knowledge properly because it lacks the necessary posterior information. Second, latent variable models suffer from exposure bias: dialogue generation is conditioned on knowledge selected from the posterior distribution during training but from the prior distribution at inference. We address these issues from two aspects: (1) we enhance the prior selection module with the necessary posterior information obtained from a specially designed Posterior Information Prediction Module (PIPM); (2) we propose a Knowledge Distillation Based Training Strategy (KDBTS) that trains the decoder with knowledge selected from the prior distribution, removing the exposure bias of knowledge selection. Experimental results on two knowledge-grounded dialogue datasets show that both PIPM and KDBTS improve over the state-of-the-art latent variable model, and that their combination yields further improvement.
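
To make the KDBTS idea concrete, below is a minimal PyTorch-style sketch of a training objective in its spirit: the decoder's generation loss is computed with knowledge sampled from the prior selector (matching the inference-time setup), while a distillation term pulls the prior selection distribution toward the posterior "teacher". This is an illustrative sketch, not the authors' released implementation; the function name `kdbts_losses` and all tensor shapes are assumptions for the example.

```python
# Hedged sketch of a KDBTS-style objective (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def kdbts_losses(prior_logits, posterior_logits, nll_given_prior_knowledge):
    """Combine generation loss and knowledge-distillation loss.

    prior_logits:     (batch, n_knowledge) scores from the prior selector,
                      conditioned on the dialogue context only.
    posterior_logits: (batch, n_knowledge) scores from the posterior selector,
                      conditioned on the context AND the gold response;
                      treated as the teacher, hence detached.
    nll_given_prior_knowledge: scalar NLL of the response, decoded from
                      knowledge sampled from the PRIOR distribution, so that
                      training matches inference and exposure bias is removed.
    """
    log_prior = F.log_softmax(prior_logits, dim=-1)
    teacher = F.softmax(posterior_logits.detach(), dim=-1)
    # KL(teacher || prior): distill posterior knowledge selection into the prior.
    kd_loss = F.kl_div(log_prior, teacher, reduction="batchmean")
    return nll_given_prior_knowledge + kd_loss

# Toy usage with random scores over a pool of 5 knowledge sentences.
prior = torch.randn(2, 5)
posterior = torch.randn(2, 5)
nll = torch.tensor(3.2)
print(kdbts_losses(prior, posterior, nll))
```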

Highlights

  • Knowledge-grounded dialogue (Ghazvininejad et al., 2018), which leverages external knowledge to generate more informative responses, has become a popular research topic in recent years

  • We report automatic evaluation results on the Wizard of Wikipedia dataset in Table 2 and make the following observations: (1) from rows 4 and 5, we can see that the Posterior Information Prediction Module (PIPM) provides necessary posterior information that is helpful for knowledge selection

  • (2) Comparing rows 4 and 6, we see that the Knowledge Distillation Based Training Strategy (KDBTS) brings a significant improvement in generation quality by removing the exposure bias of knowledge selection

Summary

Introduction

Knowledge-grounded dialogue (Ghazvininejad et al., 2018), which leverages external knowledge to generate more informative responses, has become a popular research topic in recent years. Many researchers have studied how to effectively leverage the given knowledge to enhance dialogue understanding and/or improve dialogue generation.

[Figure: an example dialogue context with a knowledge pool, including the sentences "Child of the Wolves is a children's novel, published in 1996, about a Siberian husky puppy that joins a wolf pack" and "Huskies are known amongst sled-dogs for their fast pulling style".]

At each dialogue turn t, we first select a knowledge sentence k_t^sel ∈ K_t from the knowledge pool and then leverage the selected knowledge to generate an informative response y_t. We also briefly describe SKT (Sequential Knowledge Transformer; Kim et al., 2020), the latent variable model on which we validate the effectiveness of our approach.
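
The selection step described above can be pictured with the following minimal sketch: each knowledge sentence in the pool K_t is scored against a query encoding, where the query is the dialogue context for the prior selector, or the context together with the gold response for the posterior selector, and k_t^sel is the highest-scoring sentence. This is an assumption-laden illustration of the general mechanism, not the SKT implementation; the class name `KnowledgeSelector` and the dimensions are invented for the example.

```python
# Hedged sketch of scoring a knowledge pool and picking k_t^sel (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeSelector(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.query_proj = nn.Linear(hidden, hidden)

    def forward(self, query: torch.Tensor, knowledge: torch.Tensor):
        """query: (batch, hidden) encoding of the context (prior) or of the
        context plus the gold response (posterior).
        knowledge: (batch, n_knowledge, hidden) encodings of the pool K_t.
        Returns a distribution over the pool and the argmax index k_t^sel."""
        scores = torch.einsum("bh,bkh->bk", self.query_proj(query), knowledge)
        dist = F.softmax(scores, dim=-1)
        return dist, dist.argmax(dim=-1)

# Toy usage: a pool of 4 knowledge sentences with 16-dim encodings.
selector = KnowledgeSelector(hidden=16)
context = torch.randn(2, 16)
pool = torch.randn(2, 4, 16)
dist, k_sel = selector(context, pool)
print(dist.shape, k_sel)  # torch.Size([2, 4]) and the selected indices
```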


