Abstract

Machine translation is currently undergoing a paradigm shift from statistical to neural network models. Neural machine translation (NMT) is difficult to conceptualise for translation students, especially without context. This article describes a short in-class evaluation exercise to compare statistical and neural MT, including details of student results and follow-on discussions. As part of this exercise, students carry out evaluations of two types of MT output using three translation quality assurance (TQA) metrics: adequacy, post-editing productivity, and a simple error taxonomy. In this way, the exercise introduces NMT, TQA, and post-editing. In our module, a more detailed explanation of NMT followed the evaluation.

The rise of NMT has been accompanied by a good deal of media hyperbole about neural networks and machine learning, some of which has suggested that several professions, including translation, may be under threat. This evaluation exercise is intended to empower the students and help them understand the strengths and weaknesses of this new technology. Students' findings across several language pairs mirror those from published research, such as improved fluency and word order in NMT output, alongside some unpredictable problems of omission and mistranslation.
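The article itself does not publish any scoring code. Purely as an illustration, the Python sketch below shows one common way the three TQA metrics named above (adequacy, post-editing productivity, and a simple error taxonomy) can be operationalised. All function names, rating scales, and data values here are hypothetical; the error categories echo the findings summarised in the abstract.

# Illustrative sketch only; not the authors' actual evaluation code.
# Shows one plausible operationalisation of the three TQA metrics.
from collections import Counter

def mean_adequacy(ratings):
    # Adequacy: average of per-segment ratings on an ordinal scale (e.g. 1-4).
    return sum(ratings) / len(ratings)

def pe_productivity(words_post_edited, minutes_spent):
    # Post-editing productivity: throughput in words per minute.
    return words_post_edited / minutes_spent

def error_profile(annotations):
    # Simple error taxonomy: count annotated errors per category.
    return Counter(annotations)

# Hypothetical student annotations for SMT and NMT output of the same text.
smt = {"adequacy": [3, 2, 3, 4], "words": 412, "minutes": 31,
       "errors": ["word order", "mistranslation", "word order", "grammar"]}
nmt = {"adequacy": [4, 4, 3, 4], "words": 412, "minutes": 24,
       "errors": ["omission", "mistranslation"]}

for name, data in (("SMT", smt), ("NMT", nmt)):
    print(name,
          f"adequacy={mean_adequacy(data['adequacy']):.2f}",
          f"productivity={pe_productivity(data['words'], data['minutes']):.1f} wpm",
          dict(error_profile(data["errors"])))

Aggregating per-segment results in this way lets students compare the two systems side by side, which is the kind of comparison the classroom exercise is built around.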
