Abstract

Multilingual NMT has developed rapidly, but it still suffers performance degradation caused by language diversity and model capacity constraints. To achieve competitive multilingual translation accuracy despite these limitations, knowledge distillation, which improves a student network by matching the teacher network's output, has been applied and has shown gains by focusing on the important parts of the teacher distribution. However, existing knowledge distillation methods for multilingual NMT rarely consider the distilled knowledge itself, even though it serves as the student model's training target. In this article, we propose two distillation strategies that use this knowledge effectively to improve the accuracy of multilingual NMT. First, we introduce a language-family-based approach that guides the selection of appropriate knowledge for each language pair. By distilling the knowledge of multilingual teachers, each of which handles a group of languages classified by language family, the multilingual model overcomes the accuracy degradation caused by linguistic diversity. Second, we propose target-oriented knowledge distillation, which focuses intensively on the ground-truth target of the knowledge through a penalty strategy. Our method provides sensible distillation by penalizing samples that lack the actual target while additionally emphasizing the ground-truth targets. Experiments on TED Talks datasets demonstrate the effectiveness of our method with improved BLEU scores. Discussions of the distilled knowledge and further observations of the methods also validate our results.
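To make the target-oriented idea more concrete, the sketch below shows one plausible way such a distillation loss could be assembled in PyTorch: a standard soft-label distillation term, up-weighted on positions where the teacher's top prediction misses the ground-truth token, combined with ordinary cross-entropy against the references. The function name `target_oriented_kd_loss`, the `penalty_weight` parameter, and the specific weighting scheme are illustrative assumptions, not the paper's exact formulation; the language-family teacher grouping is a data-partitioning choice and is not shown here.

```python
import torch
import torch.nn.functional as F

def target_oriented_kd_loss(student_logits, teacher_logits, targets,
                            temperature=2.0, penalty_weight=0.5):
    """Hypothetical target-oriented KD loss (a sketch, not the paper's method):
    soft-label distillation plus an extra penalty on positions where the
    teacher's top prediction does not match the ground-truth token."""
    # Soft distributions at the distillation temperature.
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Standard KD term: KL(teacher || student) per target position.
    kd = F.kl_div(s_log_probs, t_probs, reduction="none").sum(-1)

    # Penalty: up-weight positions where the teacher distribution
    # does not rank the ground-truth token first.
    teacher_top1 = teacher_logits.argmax(dim=-1)
    miss = (teacher_top1 != targets).float()
    weights = 1.0 + penalty_weight * miss

    # Ordinary cross-entropy keeps the student anchored to the references.
    ce = F.cross_entropy(student_logits.transpose(1, 2), targets,
                         reduction="none")

    return (weights * kd + ce).mean()

# Example usage with assumed shapes: 8 sentences, 20 target tokens, 32k vocab.
student_logits = torch.randn(8, 20, 32000)
teacher_logits = torch.randn(8, 20, 32000)
targets = torch.randint(0, 32000, (8, 20))
loss = target_oriented_kd_loss(student_logits, teacher_logits, targets)
```

The multiplicative weighting is only one way to realize a "penalty strategy"; an additive term on the teacher's probability of the ground-truth token would serve the same illustrative purpose.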
