Abstract

Conventional dialogue systems are retrieval-based, and their performance is directly limited by the size of the underlying dataset: such a system gives an improper response whenever a question falls outside that dataset. Recently, following the success of neural networks in machine translation, attention has shifted to building generative dialogue systems using sequence-to-sequence (seq2seq) learning. However, it remains difficult to build a satisfactory neural conversation model, as the system often tends to generate generic responses. The most widely employed method for dialogue generation today is the neural conversation model, whose main structure is a recurrent neural network (RNN) encoder-decoder. By contrast, there has been little work on introducing convolutional neural networks (CNNs) into the neural conversation model. Given that CNNs have brought substantial improvements to many natural language processing (NLP) tasks, in this work we try to improve the performance of the neural conversation model by introducing a hybrid RNN-CNN encoder. The experimental results show the promising potential of this architecture.
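
To illustrate the idea, below is a minimal sketch of one possible hybrid RNN-CNN encoder in PyTorch. It is an assumption for illustration only, not the authors' exact model: a GRU branch captures order-sensitive context while a 1-D convolutional branch extracts local n-gram features, and the two views are fused into a single context vector for the decoder. All layer names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Illustrative hybrid encoder: GRU branch + 1-D CNN branch, fused."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256,
                 num_filters=128, kernel_size=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Recurrent branch: models long-range, order-sensitive context.
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # Convolutional branch: extracts local n-gram features.
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)
        # Fuse the final RNN state with the pooled CNN features.
        self.fuse = nn.Linear(hidden_dim + num_filters, hidden_dim)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        emb = self.embedding(tokens)                # (batch, seq_len, emb_dim)
        _, h_n = self.rnn(emb)                      # h_n: (1, batch, hidden_dim)
        rnn_feat = h_n.squeeze(0)                   # (batch, hidden_dim)
        conv_in = emb.transpose(1, 2)               # (batch, emb_dim, seq_len)
        conv_out = torch.relu(self.conv(conv_in))   # (batch, num_filters, seq_len)
        cnn_feat = conv_out.max(dim=2).values       # max-pool over time
        fused = torch.tanh(self.fuse(torch.cat([rnn_feat, cnn_feat], dim=1)))
        return fused                                # context vector for the decoder

# Usage: encode a batch of two padded utterances of length 12.
encoder = HybridEncoder(vocab_size=10000)
context = encoder(torch.randint(0, 10000, (2, 12)))
print(context.shape)  # torch.Size([2, 256])
```

In this sketch the fused vector would initialize the decoder RNN, in place of the final hidden state used by a plain RNN encoder-decoder; other fusion schemes (e.g., concatenating per-timestep features for attention) are equally plausible and not specified by the abstract.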
