Abstract

Machine translation, which converts text from a source language into a target language, is one of the most important areas of Natural Language Processing. After a period of rapid development, neural machine translation has become the mainstream approach, and most natural language generation tasks are now built on the sequence-to-sequence model. Current machine translation systems combine recurrent neural networks with attention mechanisms and are essentially encoder-decoder architectures. The attention mechanism was introduced to address the excessive compression of source information into a single fixed-length vector that the decoder must rely on. The Transformer is likewise built on attention; it not only computes faster but also achieves better results on translation tasks. This paper presents a comparison of these models and describes their performance on the dataset.
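To illustrate the attention idea referred to above, the following is a minimal sketch of scaled dot-product attention in NumPy. The array shapes, variable names, and toy data are illustrative assumptions for exposition, not the implementation evaluated in the paper.

import numpy as np

def scaled_dot_product_attention(query, keys, values):
    """Minimal sketch of scaled dot-product attention (illustrative only).

    query:  (d_k,)    decoder state at the current step (assumed shape)
    keys:   (T, d_k)  encoder states for T source positions (assumed shape)
    values: (T, d_v)  encoder states used to build the context vector
    """
    d_k = keys.shape[-1]
    # Alignment scores between the query and every source position.
    scores = keys @ query / np.sqrt(d_k)      # (T,)
    # Softmax turns the scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # (T,)
    # Context vector: a weighted sum of the values, so the decoder is not
    # limited to a single fixed-length summary of the source sentence.
    return weights @ values                   # (d_v,)

# Toy usage with random encoder states and a random decoder query.
rng = np.random.default_rng(0)
context = scaled_dot_product_attention(rng.normal(size=4),
                                       rng.normal(size=(6, 4)),
                                       rng.normal(size=(6, 8)))
print(context.shape)  # (8,)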
