Abstract

The development of Machine Translation (MT) systems and their use in translation projects have made the evaluation of these systems’ output crucial. Recently, the Google Translate MT system added the central dialect of Kurdish (Sorani) to its language list. The present study evaluates the acceptability of the translated texts produced by the system. The study’s data cover several text typologies. To score the MT output, the Bilingual Evaluation Understudy (BLEU) metric was applied. The findings show that the performance of the system under study in translating English into the Sorani dialect of Kurdish is hampered by a number of linguistic and technical obstacles, which in general reduce the acceptability of the translated text.
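For readers unfamiliar with the metric, the sketch below shows how a sentence-level BLEU score is computed: modified n-gram precisions (up to 4-grams) combined with a brevity penalty. This is a minimal self-contained illustration of the standard formula, not the exact implementation or smoothing scheme used in the study; the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU with uniform weights and a brevity penalty.

    Modified precision: each candidate n-gram is credited at most as many
    times as it occurs in the reference (clipping).
    """
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0:
            return 0.0  # candidate too short to form any n-gram
        # tiny floor stands in for smoothing so log() stays defined
        precisions.append(overlap / total if overlap else 1e-9)
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# A perfect match scores 1.0; a partial match scores between 0 and 1.
print(bleu("the cat is on the mat", "the cat is on the mat"))  # → 1.0
print(bleu("the cat is on the mat", "the cat sat on the mat"))
```

In practice, corpus-level BLEU (aggregating n-gram counts over all sentences before combining) is preferred over averaging sentence scores, which is presumably how system-level results such as those reported here are obtained.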
