Abstract

This work is an evaluation of machine translation engines carried out in 2018 and 2021, inspired by Isabelle, Cherry & Foster (2017) and Isabelle & Kuhn (2018). The challenge consisted of testing the MT engines Google Translate, Microsoft Bing Translator, and DeepL on certain linguistic problems commonly encountered when translating from Spanish into English. The divergences posing a “challenge” to the engines were of morphological and lexical-syntactic types. The clear winner of the challenge was DeepL; Microsoft's Bing came second, and Google handled the linguistic problems most poorly. When the engines were compared three years apart, DeepL was the only one that improved its performance, correcting a problem it had previously shown in a test sentence. This was not the case for the other two; on the contrary, their translations were of lower quality. These systems do not appear to improve consistently over time. These findings may be valuable for translators working with such systems as pre- or post-editors, so that their efforts may be better directed.
