Abstract

In recent years, machine translation has made great progress with the rapid development of deep learning. However, neural machine translation still suffers from catastrophic forgetting: when a model is trained incrementally on newly added data, its overall performance degrades. Many incremental-learning methods have been proposed to address this problem in computer vision, but few in machine translation. In this paper, we first apply several prevailing incremental-learning methods to the task of machine translation; we then propose an ensemble model to mitigate catastrophic forgetting; finally, we evaluate model performance in our experiments with standard, widely used metrics. The results show that incremental learning is also effective for neural machine translation, and that the proposed ensemble model further improves performance to some extent.
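The abstract does not specify how the ensemble combines the models. A common way to realize this general idea is to interpolate the output distributions of the model trained before and after the incremental update, so the old model's behaviour is partially preserved. The sketch below is a minimal illustration of that interpolation with hypothetical names (`ensemble_predict`, `alpha`); it is not the authors' implementation.

```python
import math

def softmax(logits):
    """Convert a list of logits to a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(old_logits, new_logits, alpha=0.5):
    """Interpolate next-token probabilities from two models.

    `old_logits` come from the model before incremental training,
    `new_logits` from the model after it. `alpha` weights the new
    model; the (1 - alpha) share of the old model's distribution is
    what mitigates forgetting of previously learned translations.
    """
    p_old = softmax(old_logits)
    p_new = softmax(new_logits)
    return [(1 - alpha) * po + alpha * pn for po, pn in zip(p_old, p_new)]
```

In practice `alpha` trades off plasticity on the new data against retention of the old data, and would be tuned on a held-out set drawn from both.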
