Abstract

Most current methods for multi-domain adaptive neural machine translation (NMT) rely on mixing data from multiple domains into a single model. This mixing can lead to imbalanced training data: the model concentrates on the large-scale general domain while neglecting the scarce data of specific domains, which degrades translation performance. In this paper, we propose a multi-domain adaptive NMT method based on a Domain Data Balancer (DDB) to address the data imbalance caused by simple fine-tuning. By adding the DDB to the Transformer model, we adaptively learn the sampling distribution over each group of training data, replace the maximum likelihood estimation criterion with empirical risk minimization training, and introduce a reinforcement-learning-based reward to iteratively update the bilevel optimizer. Experimental results show that the proposed method improves the baseline model by an average of 1.55 and 0.14 BLEU (Bilingual Evaluation Understudy) points on English-German and Chinese-English multi-domain NMT, respectively.
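
To illustrate the general idea of a learned domain sampler with reward-driven updates, the following is a minimal, hypothetical sketch. It is not the paper's DDB implementation: the softmax-over-logits sampler, the REINFORCE-style update rule, and all names (`DomainSampler`, the domain labels, the stand-in reward) are illustrative assumptions made here for exposition.

```python
# Hypothetical sketch of a reward-driven domain sampler (NOT the paper's DDB).
# A softmax over learnable logits defines the sampling distribution over domains;
# a REINFORCE-style update raises the probability of domains whose batches
# improve a held-out reward (e.g., dev BLEU gain or negative dev loss).
import math
import random


class DomainSampler:
    def __init__(self, domains, lr=0.1):
        self.domains = list(domains)
        self.logits = {d: 0.0 for d in self.domains}  # start from a uniform distribution
        self.lr = lr

    def probs(self):
        # Softmax over logits gives the current sampling distribution.
        m = max(self.logits.values())
        exp = {d: math.exp(v - m) for d, v in self.logits.items()}
        z = sum(exp.values())
        return {d: e / z for d, e in exp.items()}

    def sample(self):
        # Draw one domain according to the current distribution.
        p = self.probs()
        r, acc = random.random(), 0.0
        for d in self.domains:
            acc += p[d]
            if r <= acc:
                return d
        return self.domains[-1]

    def update(self, domain, reward, baseline=0.0):
        # REINFORCE-style step: grad of log p(domain) w.r.t. logit d is
        # (indicator(d == domain) - p[d]); scale it by the advantage.
        p = self.probs()
        advantage = reward - baseline
        for d in self.domains:
            grad = (1.0 if d == domain else 0.0) - p[d]
            self.logits[d] += self.lr * advantage * grad


# Usage: sample a domain per training step, train on a batch from it (omitted),
# then reward the domain with a validation signal (random stand-in here).
sampler = DomainSampler(["news", "medical", "law", "it"])
for step in range(100):
    domain = sampler.sample()
    reward = random.uniform(-1.0, 1.0)  # stand-in for a real validation reward
    sampler.update(domain, reward)
print(sampler.probs())
```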
