Abstract

A task-oriented dialogue system (TOD) is an important application of artificial intelligence. In recent years, work on personalized TODs has attracted increasing research attention and has seen much progress. The main challenge for such dialogue systems is how to exploit user profiles within a fixed, monolithic dialogue flow. However, most existing works overlook the observation that the personalization capability of a dialogue system is fundamental and generic, and they treat all attributes of the user profile equally throughout the dialogue flow, which makes them inadequate for building a well-performing personalized TOD. In this paper, we propose a two-stage learning framework with GPT2 as the backbone to alleviate these two problems. In the first stage, we finetune the GPT2 model on personalized open-domain dialogues so that it preliminarily acquires personalization ability. In the second stage, we transfer this ability to personalized task-oriented dialogues, equipping the proposed model with the desired personalization capability. Moreover, we present a dynamic profile fusion mechanism and an auxiliary task that detects which attributes contribute to the current utterance, further improving the model's performance. Finally, we rewrite the attribute descriptions of user profiles as sentences to mitigate the consistency gap between open-domain and task-oriented dialogues. Experimental results show that the proposed model achieves superior results compared to state-of-the-art models on two versions of the Personalized bAbI dataset.
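The abstract's dynamic profile fusion mechanism can be illustrated with a minimal sketch: each profile attribute is scored against the current dialogue context and the attribute embeddings are fused by a softmax-weighted sum, so different attributes dominate at different turns. This is an illustrative reconstruction, not the paper's exact formulation; the function name, the dot-product scoring, and the toy embeddings are all assumptions.

```python
import numpy as np

def dynamic_profile_fusion(context_vec, attribute_vecs):
    """Illustrative sketch of dynamic profile fusion (assumed form).

    Scores each profile-attribute embedding by dot-product similarity to
    the current dialogue-context embedding, normalizes the scores with a
    softmax, and returns the weighted sum as the fused profile vector.
    """
    scores = attribute_vecs @ context_vec        # (num_attrs,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over attributes
    fused = weights @ attribute_vecs             # (embed_dim,)
    return fused, weights

# Toy example: 3 profile attributes, 4-dim embeddings.
rng = np.random.default_rng(0)
attrs = rng.normal(size=(3, 4))   # e.g. age, gender, dietary preference
ctx = rng.normal(size=4)          # current dialogue-context embedding
fused, weights = dynamic_profile_fusion(ctx, attrs)
```

In a GPT2-based model the fused vector would typically be injected into the input representation; the auxiliary attribute-detection task described above could then supervise `weights` directly.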

