Abstract
At present, researchers generally focus on the problems of each module of a task-oriented dialogue system in isolation, treating the modules as independent of one another, so a downstream module cannot exploit error information from the upstream module. Moreover, these methods require manually labeled data, which is slow and expensive to create. In this paper, we propose an end-to-end approach to address these problems. Based on the hierarchical structure of multi-turn dialogue, we construct two levels of attention mechanisms, one at the word level and one at the sentence level, which makes good use of context information. In addition, we identify five key modeling and training techniques and apply them to our model, yielding a new model with better performance on multi-turn dialogue. With evaluations on the Jing Dong Customer Service dataset¹ and the Ubuntu Dialogue Corpus [1], we show that our model improves conversation performance over previous end-to-end dialogue system methods.

¹ A multi-turn conversation dataset for the 2018 JD Dialog Challenge.
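The two-level attention described above can be illustrated with a minimal sketch: word-level attention pools each utterance into a sentence vector, and sentence-level attention pools those into a single dialogue context vector. This is an illustrative toy with random embeddings and hypothetical learned query vectors, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(vectors, query):
    """Dot-product attention: weighted sum of vectors, weighted by similarity to query."""
    weights = softmax(vectors @ query)   # one weight per vector, summing to 1
    return weights @ vectors

# Toy dialogue: 2 utterances x 3 words x embedding dim 4 (random, for illustration)
rng = np.random.default_rng(0)
dialogue = rng.normal(size=(2, 3, 4))
word_query = rng.normal(size=4)   # hypothetical learned word-level query
sent_query = rng.normal(size=4)   # hypothetical learned sentence-level query

# Word-level attention: summarize each utterance into one sentence vector
sentence_vecs = np.stack([attend(utt, word_query) for utt in dialogue])

# Sentence-level attention: summarize the utterance vectors into a context vector
context_vec = attend(sentence_vecs, sent_query)
print(context_vec.shape)  # (4,)
```

In a trained model the query vectors (and the projections producing the scores) would be learned parameters, and the pooled context vector would feed the response generator.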