Abstract

At present, researchers generally treat the modules of a task-oriented dialogue system as independent problems, so a lower module cannot use error information from the upper modules. Moreover, these methods require manually labeled data, which is slow and expensive to create. In this paper, we propose an end-to-end approach to these problems. Exploiting the hierarchical structure of multi-turn dialogue, we construct two levels of attention mechanisms, one at the word level and one at the sentence level, which make good use of context information. At the same time, we identify five key modeling and training techniques and apply them to our model, yielding a new model with better performance on multi-turn dialogue. With evaluations on the Jing Dong Customer Service dataset (a multi-turn conversation dataset for the 2018 JD Dialog Challenge) and the Ubuntu Dialogue Corpus [1], we show that our model improves conversation performance over previous end-to-end dialogue system methods.
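To make the two-level attention concrete, here is a minimal sketch in PyTorch: word-level attention compresses each utterance into a vector, and sentence-level attention then weighs those utterance vectors into a single dialogue context vector. The learned-query parameterization and names such as `word_query` and `sent_query` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttention(nn.Module):
    """Two-level attention: word-level within each utterance,
    then sentence-level across the resulting utterance summaries.
    A sketch only; the paper's actual scoring function may differ."""

    def __init__(self, hidden_dim):
        super().__init__()
        # Learned scoring vectors for each attention level (hypothetical choice).
        self.word_query = nn.Linear(hidden_dim, 1, bias=False)
        self.sent_query = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, word_states, word_mask):
        # word_states: (batch, n_utts, n_words, hidden_dim), e.g. RNN outputs
        # word_mask:   (batch, n_utts, n_words), 1 for real tokens, 0 for padding
        scores = self.word_query(word_states).squeeze(-1)           # (B, U, W)
        scores = scores.masked_fill(word_mask == 0, float('-inf'))
        alpha = F.softmax(scores, dim=-1)                           # word-level weights
        utt_vecs = torch.einsum('buw,buwh->buh', alpha, word_states)

        sent_scores = self.sent_query(utt_vecs).squeeze(-1)         # (B, U)
        beta = F.softmax(sent_scores, dim=-1)                       # sentence-level weights
        context = torch.einsum('bu,buh->bh', beta, utt_vecs)        # dialogue context vector
        return context
```

Because both levels are differentiable, gradients from the response-generation loss flow through the sentence-level weights back into the word-level encoder, which is what lets the end-to-end model propagate error information across what would otherwise be separate pipeline modules.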
