Abstract

We put forward a general solution for building task-oriented dialogue systems using convolutional neural networks (CNNs) and attention mechanisms. Dominant task-oriented dialogue models follow a pipeline design and usually rely on complex recurrent neural networks (RNNs); as a result, their structures are complicated and their parameter counts exceed one million. In this paper, we propose a novel end-to-end task-oriented dialogue system with a clear structure, based solely on CNNs and attention mechanisms and entirely dispensing with RNNs. Our model follows the sequence-to-sequence (seq2seq) architecture and is end-to-end in the true sense. Experimental results indicate that our model outperforms state-of-the-art approaches on the Cam-Rest676 and KVRET datasets on two evaluation metrics, namely task completion and quality of language generation. Meanwhile, training is 3–10 times faster and the model uses only one third as many parameters.
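The core architectural claim above, replacing recurrence with convolutions plus attention, can be illustrated with a minimal NumPy sketch. This is a toy, not the paper's implementation: the function names, kernel sizes, and dimensions are all assumptions chosen for clarity. A 1D convolution encodes the source tokens in parallel (no time-step recurrence), and scaled dot-product attention lets each decoder query read from the encoded states.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1D convolution over the time axis.
    x: (seq_len, d_in) token embeddings; w: (k, d_in, d_out) kernel.
    All positions are computed independently -- no recurrence."""
    k = w.shape[0]
    return np.stack([
        sum(x[t + j] @ w[j] for j in range(k))
        for t in range(x.shape[0] - k + 1)
    ])

def attention(q, k, v):
    """Scaled dot-product attention: each query row attends over all keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])           # (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax rows sum to 1
    return weights @ v                                # (n_q, d_v)

# Illustrative shapes: 5 source tokens, embedding dim 8, kernel width 3.
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 8))
kernel = rng.normal(size=(3, 8, 16))
encoded = conv1d(src, kernel)                 # (3, 16) encoder states
context = attention(encoded, encoded, encoded)  # self-attention over them
```

Because every encoder position is computed independently of the others, the whole source side parallelizes across the sequence, which is the intuition behind the reported 3–10x training speedup over RNN pipelines.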
