Abstract

Location and language now pose fewer barriers to the spread of information around the world. Machine translation systems carry out the tedious task of translating between languages quickly and reliably. Compared with human translation, however, issues related to semantic meaning can still arise. Machine translation systems differ in their effectiveness, and they can be evaluated either by humans or by automatic methods. In this study, we evaluate the effectiveness of two popular Machine Translation (MT) systems, Google Translate and Babylon, in translating sentences from English to Arabic, using the automatic evaluation method Bilingual Evaluation Understudy (BLEU). Our preliminary tests indicate that Google Translate is more effective than the Babylon MT system at translating English sentences into Arabic.
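To make the evaluation method concrete: BLEU scores a candidate translation against one or more references by combining clipped n-gram precisions with a brevity penalty. The sketch below is a minimal, single-reference illustration of that idea, not the exact scoring setup used in this study; the function name, smoothing constant, and default of 4-gram precision are our own illustrative choices.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (illustrative sketch).

    Computes modified n-gram precision for n = 1..max_n, clipping each
    candidate n-gram count by its count in the reference, then combines
    the precisions geometrically and applies a brevity penalty. A tiny
    smoothing constant avoids log(0) when an n-gram order has no match.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to its reference scores 1.0, while an unrelated sentence scores near 0; comparing two MT systems then amounts to averaging such scores over a test set of sentence pairs for each system.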
