Abstract

This empirical corpus study investigates the quality of neural machine translation (NMT) output and its post-edits (NMTPE) at the German Department of the European Commission’s Directorate-General for Translation (DGT) by evaluating NMT output, NMTPE, and the respective revisions (REV) with the automatic error annotation tool Hjerson (Popović 2011) and the more fine-grained manual Multidimensional Quality Metrics (MQM) framework (Lommel 2014). Results show that quality assurance measures by post-editors and revisers at the DGT are most often necessary for lexical errors. More specifically, if post-editors correct mistranslations, terminology errors, or stylistic errors in an NMT sentence, revisers are likely to correct the same type of error in the same sentence, suggesting a certain transitivity between the NMT system and human post-editors.
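The co-occurrence pattern behind this claim can be illustrated with a minimal sketch. The Python snippet below is not the study’s pipeline (which relied on Hjerson and manual MQM annotation); it assumes hypothetical per-sentence annotations and category labels, and simply counts how often the same error category is corrected in the same sentence at both the post-editing and revision stages:

```python
from collections import Counter

# Hypothetical annotations: for each sentence ID, the set of error
# categories corrected at the post-editing (PE) and revision (REV) stages.
# Category names here only illustrate the MQM-style labels in the abstract.
pe_edits = {
    1: {"mistranslation"},
    2: {"terminology", "style"},
    3: set(),
    4: {"mistranslation", "style"},
}
rev_edits = {
    1: {"mistranslation"},
    2: {"terminology"},
    3: {"style"},
    4: {"style"},
}

# Count, per category, the sentences in which both PE and REV corrected
# an error of that category (the "transitivity" pattern described above).
overlap = Counter()
for sent_id, pe_cats in pe_edits.items():
    overlap.update(pe_cats & rev_edits.get(sent_id, set()))

print(overlap)
# Counter({'mistranslation': 1, 'terminology': 1, 'style': 1})
```

A high overlap count for a category (relative to how often that category is corrected at all) would indicate that revisers tend to intervene on the same sentences and error types as post-editors, which is the pattern the study reports for lexical errors.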
