Abstract

Neural Machine Translation (NMT) is currently the most promising approach to machine translation. The attention mechanism is a successful technique in modern Natural Language Processing (NLP), especially in tasks such as machine translation. The recently proposed Transformer architecture is based entirely on attention mechanisms and achieves new state-of-the-art results in neural machine translation, outperforming other sequence-to-sequence models. Although it is successful in resource-rich settings, its applicability to low-resource language pairs is still debatable. Moreover, when the language pair is morphologically rich and the corpus is multi-domain, the lack of a large parallel corpus becomes a significant barrier. In this study, we explore two NMT architectures, Long Short-Term Memory (LSTM) and Transformer-based NMT, for translating the Tamil to Sinhala language pair. The Transformer outperforms the LSTM by 2.43 BLEU points in the Tamil-to-Sinhala direction. This work also provides a preliminary comparison of statistical machine translation (SMT) and NMT for Tamil to Sinhala in an open-domain context.
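For reference, the attention mechanism that the Transformer relies on is the scaled dot-product attention of Vaswani et al. (2017), usually written as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V,$$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension; this formula is standard background rather than a contribution of the present study.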
