This study examines the English version of an ethnographic text translated by ChatGPT, a new technology based on Artificial Intelligence (AI). The purpose of the study is to evaluate the syntactic, semantic, and pragmatic aspects of the translation in order to assess the strengths and weaknesses of a technology presented as a formidable tool poised to replace translators and leave them unemployed. The method is qualitative: it interprets the text in the Source Language (SL) and evaluates the translation against criteria of fidelity to the meaning of the SL text, cohesion of the discourse in the Target Language (TL), and respect for the cultural context. The data, manually extracted from the translated text, consist of the errors and mistakes found in the translation. The analysis is conducted following Andrew Chesterman's theory of the three corresponding translation strategies (syntactic, semantic, and pragmatic). The results reveal that, contrary to current hype, ChatGPT engages primarily in literal translation and does not engage in oblique translation. Indeed, errors and mistakes of a syntactic, semantic, and pragmatic nature are abundant, and procedures such as transposition, modulation, foreignization, domestication, adaptation, and transediting are almost unknown to it. At its current stage, ChatGPT is a tool with a vast vocabulary that can effectively assist translators in their work; it is too early to envision this technology replacing experienced translators. Current scientific research should nonetheless take ChatGPT into account.