Abstract

This paper analyzes the “encoder-decoder” framework in neural machine translation and frames the underlying natural language processing task as sequence learning. Recurrent neural networks, which specialize in sequence data by combining the historical hidden-state output with the current input, are first applied to achieve good translation results. The attention mechanism is then brought into natural language processing, and a Transformer model based on a full attention mechanism is constructed so that translating the source language and aligning it with the target language are performed jointly. Evaluation of this full-attention Transformer shows that, with the f feature included in both models, its Pearson correlation coefficient is 0.0152 (2.92%) higher than that of the Bilingual Expert model. This result supports the Transformer model’s ability to translate English sentences correctly and effectively, and shows that natural language processing technology can improve both the efficiency and the overall quality of English long-sentence translation.
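To make the two update rules the abstract contrasts concrete, the sketch below shows a single recurrent step (history mixed with the current input) and scaled dot-product attention (the core operation of a full-attention Transformer, which produces a soft alignment over source positions). This is a minimal NumPy illustration, not the paper's implementation; all function names, shapes, and values are assumptions.

```python
# Minimal sketch of the two mechanisms discussed in the abstract.
# All names, dimensions, and parameters here are illustrative assumptions.
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    """One recurrent step: combine the historical hidden-state output
    (h_prev) with the current input (x_t), as the abstract describes."""
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

def scaled_dot_product_attention(Q, K, V):
    """Full attention: every target position attends to (aligns with)
    every source position at once, rather than step by step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> soft alignment
    return weights @ V                              # weighted sum of values

# Toy usage: 4 source tokens, 3 target positions, model width 8.
rng = np.random.default_rng(0)
K = V = rng.normal(size=(4, 8))
Q = rng.normal(size=(3, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

The attention weights computed inside scaled_dot_product_attention are what allow a Transformer to translate and align in one pass, which is the property the abstract attributes to the full-attention model.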
