Abstract

The present study compares the performance of three machine translation tools, Google Translate, Systran, and Microsoft Bing, in English-Arabic translation in order to answer two questions: (a) whether the three tools can be ordered in a hierarchy of performance, and (b) whether they can handle lexically and structurally ambiguous sentences and garden path sentences. Using a set of constructed and selected English sentences, the three tools are tested on the morphosyntactic features of number, gender, case, definiteness, and humanness; on agreement between cardinal numerals and their head nouns; and on lexically and structurally ambiguous sentences and garden path sentences. The results show that (a) with respect to handling the morphosyntactic features of subject-verb agreement in Standard Arabic (SA), all three machine translation tools perform equally well, and no tool seems to perform significantly better than the other two; (b) some marked features of SA (e.g. dual number and humanness) seem to pose problems for the machine translation tools; and (c) lexically and structurally ambiguous sentences and garden path sentences seem to be the most challenging sentence types for all three tools.
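The evaluation design described above can be illustrated with a small sketch. The Python snippet below is a hypothetical harness, not the authors' actual procedure: the `translate` function is a placeholder for whatever interface each tool exposes (web API, browser, or manual collection), and the test sentences merely exemplify the feature categories named in the abstract.

```python
# Minimal sketch of a harness for comparing MT engines on feature-tagged
# English test sentences. `translate(text, engine)` is a hypothetical
# placeholder, not a real API call.

TEST_SENTENCES = {
    "dual number": "The two students wrote their essays.",
    "numeral-noun agreement": "Three teachers attended the meeting.",
    "lexical ambiguity": "She walked to the bank.",
    "garden path": "The horse raced past the barn fell.",
}

ENGINES = ["Google Translate", "Systran", "Microsoft Bing"]


def translate(text: str, engine: str) -> str:
    """Placeholder: return the Arabic output of `engine` for `text`."""
    raise NotImplementedError("Hook up the relevant service or collect output manually.")


def collect_outputs() -> dict:
    """Gather each engine's translation of every test sentence so the
    morphosyntactic features of interest can be evaluated manually."""
    return {
        feature: {engine: translate(sentence, engine) for engine in ENGINES}
        for feature, sentence in TEST_SENTENCES.items()
    }
```

Such a harness would only automate the collection step; judging agreement, definiteness, and the handling of ambiguity in the Arabic output would still require manual linguistic analysis, as in the study.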
