Abstract

Dialogue state tracking and dialogue response generation are two crucial modules in a task-oriented dialogue system. Recent work on task-oriented dialogue systems has mainly studied the two modules separately, without considering their end-to-end joint learning. For dialogue state tracking, many methods analyze the entire dialogue history, which leads to growing memory usage and computational cost as the dialogue lengthens. For dialogue response generation, oracle dialogue-state labels are used directly to generate the response, disregarding the vital impact that dialogue state tracking has on it. In this paper, we propose a Probabilistic Graph-based end-to-end model combined with a Variational Auto-Encoder (PGVAE), which jointly trains dialogue state tracking and dialogue response generation using a simplified dialogue context rather than the entire dialogue history. We address the loss of contextual information caused by discarding the dialogue history by using the dialogue state of the previous turn as a summary of that history. Experimental results on the real-world MultiWOZ 2.0 and MultiWOZ 2.1 datasets demonstrate the effectiveness of the proposed model for task-oriented dialogue systems.
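
To make the "simplified dialogue context" idea concrete, the sketch below contrasts two ways of building a model input: concatenating the full dialogue history versus serializing only the previous turn's dialogue state together with the current user utterance. This is an illustrative assumption about the input construction, not the paper's actual PGVAE implementation; the function names, separator tokens, and slot names are hypothetical.

```python
from typing import Dict, List


def build_full_history_context(history: List[str], user_utterance: str) -> str:
    """Baseline-style input: concatenate every past utterance.

    The input length grows linearly with the number of turns, which is the
    memory/compute issue the abstract points out.
    """
    return " ".join(history + [user_utterance])


def build_simplified_context(prev_state: Dict[str, str], user_utterance: str) -> str:
    """Simplified input: previous-turn dialogue state + current utterance.

    The previous state acts as a compact summary of all earlier turns, so the
    input stays roughly constant in size per turn. [STATE]/[USER] markers and
    the slot=value serialization are illustrative choices.
    """
    state_str = " ; ".join(f"{slot}={value}" for slot, value in sorted(prev_state.items()))
    return f"[STATE] {state_str} [USER] {user_utterance}"


if __name__ == "__main__":
    # MultiWOZ-style slot names, used here only as an example.
    prev_state = {"hotel-area": "north", "hotel-pricerange": "cheap"}
    print(build_simplified_context(prev_state, "Can you book it for 2 nights?"))
```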
