Abstract

Translating text from one language into another has become instrumental as people increasingly interact with speakers of different languages. However, machine translation is not an easy computational task when there is a language-resource gap. This paper presents empirical results on the performance of two models, the Long Short-Term Memory (LSTM) and the Bidirectional Long Short-Term Memory (BiLSTM), as machine translation models for Bahasa Indonesia and the Sundanese language. The empirical results show that the Bidirectional Long Short-Term Memory model achieves higher performance as a translator from Sundanese to Bahasa Indonesia and vice versa (0.95 and 0.95 average training accuracy, respectively, and 0.90 and 0.89 average testing BLEU scores, respectively) than the Long Short-Term Memory model in the same translation directions (0.93 and 0.92 average training accuracy, respectively, and 0.91 and 0.88 average testing BLEU scores, respectively). These results corroborate previously reported studies claiming that the Bidirectional Long Short-Term Memory model can outperform the Long Short-Term Memory model when processing sequential data.
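Since the models above are compared by their testing BLEU scores, a minimal sketch of sentence-level BLEU (modified n-gram precision combined with a brevity penalty) may help clarify the metric. The function name, whitespace tokenization, and single-reference setup here are illustrative assumptions, not the paper's actual evaluation code.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of all n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    # Modified n-gram precision: each candidate n-gram count is
    # clipped by its count in the reference.
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    # Geometric mean of the n-gram precisions, scaled by the penalty.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "saya pergi ke pasar hari ini".split()
print(sentence_bleu(reference, reference))                   # exact match scores 1.0
print(sentence_bleu(reference, "saya pergi ke pasar".split()))  # penalized for being short
```

A score of 0.90, as reported for the BiLSTM, therefore indicates candidate translations whose n-grams overlap the references almost completely at comparable lengths.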
