Abstract

Word embeddings play an important role in Neural Machine Translation (NMT). However, they still suffer from a series of problems: they ignore prior knowledge of the associations between words, their parameters are trained passively under task-specific constraints alone, and each embedding's learning process is isolated from the others. In this paper, we propose a new word embedding method that adds prior knowledge of word associations to the training process and, at the same time, shares iterative training results among all word embeddings. The method is applicable to all mainstream NMT systems. In our experiments, it achieves an improvement of +0.9 BLEU points on the WMT'14 English→German task. On the Global Voices v2018q4 Spanish→Czech low-resource translation tasks, it yields a more prominent improvement over strong baselines (+2.6 BLEU on average). As another "bonus", the new word embedding has far fewer parameters than a traditional word embedding table, as low as 15% of the baselines' parameters.

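The abstract leaves the exact formulation open, but the two stated ingredients, shared training across all embeddings and a prior on word associations, can be made concrete. Below is a minimal, hypothetical PyTorch sketch, not the authors' actual method: each word's embedding is a mixture over a small shared basis (so every gradient step updates parameters shared by all words, and the parameter count drops), and an assumed V×V association matrix `assoc` regularizes the mixture weights of associated words toward each other. The names `SharedBasisEmbedding`, `association_prior_loss`, and the choice of `num_basis` are illustrative.

```python
# Minimal sketch, assuming PyTorch; illustrative of the general idea only.
import torch
import torch.nn as nn


class SharedBasisEmbedding(nn.Module):
    """Each word's embedding is a softmax-weighted mixture of shared bases."""

    def __init__(self, vocab_size: int, dim: int, num_basis: int = 64):
        super().__init__()
        # Per-word mixture logits over the shared basis: V x K parameters.
        self.mixture = nn.Parameter(torch.randn(vocab_size, num_basis) * 0.01)
        # Shared basis vectors: K x D parameters, updated by every word.
        self.basis = nn.Parameter(torch.randn(num_basis, dim) * 0.01)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.mixture[token_ids], dim=-1)
        return weights @ self.basis  # (..., K) @ (K, D) -> (..., D)


def association_prior_loss(emb: SharedBasisEmbedding,
                           assoc: torch.Tensor) -> torch.Tensor:
    """Penalize distance between mixture weights of associated word pairs.

    `assoc` is a hypothetical V x V matrix of prior association strengths
    (e.g., from co-occurrence counts or a lexicon); larger entries pull the
    two words' mixture weights closer together.
    """
    w = torch.softmax(emb.mixture, dim=-1)  # V x K mixture weights
    sq_norm = (w * w).sum(dim=-1)           # V squared row norms
    # Pairwise squared distances between all rows of w, weighted by assoc.
    dists = sq_norm[:, None] + sq_norm[None, :] - 2 * (w @ w.t())
    return (assoc * dists).sum() / assoc.sum().clamp(min=1.0)
```

Under the illustrative sizes V = 50,000, D = 512, K = 64, this layer has 50,000·64 + 64·512 ≈ 3.2M parameters versus 50,000·512 ≈ 25.6M for a standard embedding table, roughly 13%, in the same ballpark as the 15% figure reported above.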