Abstract

Dialogue state tracking (DST), a crucial component of task-oriented dialogue (TOD) systems, is designed to track the user’s goal. Existing DST models mainly focus on monolingual dialogue input and fail to meet the growing need for TOD systems to provide multilingual services. Therefore, this paper proposes a novel Zero-shot Language Extension scenario for DST, extending monolingual DST to multilingual DST without extra high-cost dialogue data annotation. In this scenario, the multilingual DST needs only a single shared model to handle multilingual input and generate a unified dialogue state. This setting makes deploying a complete multilingual TOD easy, since the downstream components of an existing monolingual TOD can be reused. Specifically, we achieve the language extension by multi-auxiliary-task fine-tuning of multilingual pre-trained models, where five relevant auxiliary tasks are jointly designed: monolingual DST, cross-lingual DST, forward word translation, utterance recovery, and semantic similarity. The extended multilingual DST model is enhanced through joint optimization with all the auxiliary tasks, capturing multilingual context understanding and cross-lingual alignment characteristics. Comprehensive experiments on the Multilingual WOZ dataset (English → German and English → Italian) and the cross-lingual MultiWOZ dataset (English → Chinese and Chinese → English) demonstrate the effectiveness and superiority of the proposed method.
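The joint optimization described above can be sketched as a weighted sum of the five auxiliary-task losses, updating one shared model. This is a minimal illustration, not the authors' implementation: the task names follow the abstract, while the weights and the `joint_loss` helper are assumptions for illustration (a real run would compute each loss from model outputs on that task's batch and backpropagate through a multilingual pre-trained encoder).

```python
# Hedged sketch of multi-auxiliary-task joint optimization:
# total loss L = sum_t w_t * L_t over the five auxiliary tasks,
# so a single shared model receives gradients from all tasks at once.
# Task names come from the abstract; weights/helper are illustrative.

TASKS = [
    "monolingual_dst",
    "cross_lingual_dst",
    "forward_word_translation",
    "utterance_recovery",
    "semantic_similarity",
]

def joint_loss(task_losses, weights=None):
    """Combine per-task scalar losses into one training objective.

    task_losses: dict mapping task name -> scalar loss value.
    weights: optional dict of per-task weights (defaults to 1.0 each).
    """
    weights = weights or {}
    return sum(weights.get(t, 1.0) * task_losses[t] for t in TASKS)

# Toy usage with dummy per-task losses.
losses = {t: float(i + 1) for i, t in enumerate(TASKS)}
print(joint_loss(losses))            # equal weighting: 1+2+3+4+5 = 15.0
print(joint_loss(losses, {"monolingual_dst": 2.0}))  # upweight main task: 16.0
```

In practice the per-task weights balance the main DST objective against the alignment-oriented auxiliary tasks; the abstract does not specify the weighting scheme, so equal weights are shown here as the default.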
