Abstract
Multi-turn dialogue generation aims to produce natural, fluent responses that remain consistent with a history of multiple consecutive utterances. It is more challenging than its single-turn counterpart, since the model must capture the topic drift across the multi-turn dialogue history. In this paper, we propose a multi-turn dialogue generation model that incorporates topic-drift-aware information into a hierarchical encoder-decoder framework to generate coherent responses. The model first uses a Convolutional Neural Network (CNN) based topic model to obtain a topic representation of each utterance. A topic drift model then encodes the sequential topics of the multi-turn dialogue history to infer the topic of the response. During response generation, a specially designed topic-drift-aware generator dynamically balances the influence of the inferred response topic and the local word structure. Furthermore, we employ multi-task learning to optimize the topic drift model and dialogue generation simultaneously. Extensive experiments on two benchmark datasets (the Cornell Movie Dialog Corpus and the Ubuntu Dialogue Dataset) show that the proposed model generates more coherent responses and significantly outperforms other dialogue generation models.
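The abstract does not specify how the generator balances the inferred response topic against the local word structure. A common realization of such dynamic balancing is a learned sigmoid gate that mixes a topic-conditioned word distribution with the decoder's language-model distribution; the sketch below illustrates that idea only, and the function names, the scalar gate score, and the mixing form are all assumptions, not the paper's actual method:

```python
import math

def sigmoid(x):
    # Standard logistic function, squashes the gate score into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def topic_drift_aware_mix(p_lm, p_topic, gate_score):
    """Hypothetical gating scheme: mix the decoder's word distribution
    (local word structure) with a distribution induced by the inferred
    response topic. `gate_score` stands in for a learned scalar computed
    from the decoder state and the topic vector (an assumption here)."""
    lam = sigmoid(gate_score)  # weight placed on the topic distribution
    return [lam * pt + (1.0 - lam) * pl for pl, pt in zip(p_lm, p_topic)]

# Toy vocabulary of 3 words; gate_score = 0 gives an even 0.5/0.5 mix.
mixed = topic_drift_aware_mix([0.7, 0.2, 0.1], [0.1, 0.6, 0.3], 0.0)
```

Because both inputs are probability distributions and the gate produces a convex combination, the mixed output remains a valid distribution; a larger gate score shifts probability mass toward topic-consistent words, a smaller one toward the local language model.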