Abstract

Open-domain dialog generation is an essential and challenging problem in artificial intelligence. In this article, we present a personalized dialog system that leverages the advantages of multitask learning and reinforcement learning for personalized dialog generation (MRPDG). Specifically, MRPDG consists of two subtasks: 1) an author profiling module that recognizes user characteristics from the input sentence (auxiliary task) and 2) a personalized dialog generation module that produces informative, grammatical, and coherent responses with reinforcement learning algorithms (primary task). Three kinds of rewards are proposed to generate high-quality conversations. We investigate the effectiveness of three widely used reinforcement learning methods [i.e., Q-learning, policy gradient, and the actor-critic (AC) algorithm] for personalized dialog generation and demonstrate that the AC algorithm achieves the best results within the proposed framework. Comprehensive experiments are conducted to evaluate the performance of the proposed model on two real-life data sets. Experimental results show that MRPDG is able to produce high-quality personalized dialogs for users with different characteristics. Quantitatively, the proposed model outperforms the compared methods across different evaluation metrics, including human evaluation, BiLingual Evaluation Understudy (BLEU), and perplexity.
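
The abstract does not include implementation details, but the following minimal sketch (written in PyTorch, which is an assumption, not the authors' code) illustrates the general idea of combining a primary actor-critic generation loss with an auxiliary author-profiling loss in one multitask objective. All module names, dimensions, loss weights, and the placeholder reward function are illustrative; the paper's three actual rewards and architecture are not reproduced here.

```python
# Hedged sketch: multitask training of a dialog generator (actor-critic, primary task)
# together with an author-profiling classifier (auxiliary task).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskDialogModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden=128, num_profiles=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRUCell(hidden, hidden)
        self.actor = nn.Linear(hidden, vocab_size)      # token policy (primary task)
        self.critic = nn.Linear(hidden, 1)              # state-value estimate
        self.profiler = nn.Linear(hidden, num_profiles) # author profiling (auxiliary task)

    def forward(self, src, max_len=10):
        emb = self.embed(src)
        _, h = self.encoder(emb)                  # encode the input utterance
        h = h.squeeze(0)
        profile_logits = self.profiler(h)         # auxiliary prediction from the encoding
        log_probs, values, tokens = [], [], []
        inp = emb.new_zeros(src.size(0), emb.size(-1))
        for _ in range(max_len):                  # sample a response token by token
            h = self.decoder(inp, h)
            dist = torch.distributions.Categorical(logits=self.actor(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            values.append(self.critic(h).squeeze(-1))
            tokens.append(tok)
            inp = self.embed(tok)
        return profile_logits, torch.stack(log_probs), torch.stack(values), torch.stack(tokens)

def reward_fn(tokens):
    # Placeholder scalar reward per response; the paper combines three rewards
    # whose exact definitions are not given in the abstract.
    return (tokens != 0).float().mean(dim=0)

model = MultitaskDialogModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

src = torch.randint(1, 1000, (8, 12))        # a batch of input utterances (toy data)
profile_labels = torch.randint(0, 4, (8,))   # gold user-trait labels (toy data)

profile_logits, log_probs, values, tokens = model(src)
reward = reward_fn(tokens)                   # one reward per sampled response
advantage = reward.unsqueeze(0) - values     # actor-critic advantage estimate
actor_loss = -(log_probs * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
aux_loss = F.cross_entropy(profile_logits, profile_labels)
loss = actor_loss + 0.5 * critic_loss + 0.3 * aux_loss  # weighted multitask objective
loss.backward()
opt.step()
```

The 0.5 and 0.3 weights are arbitrary placeholders for the trade-off between the critic and auxiliary losses; in practice such weights would be tuned on validation data.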
