Abstract

The aim of this paper is to evaluate the quality of popular machine translation engines on three texts of different genres in a scenario in which both source and target languages are morphologically rich. Translations are obtained from the Google Translate and Microsoft Bing engines, and German-Croatian is selected as the language pair. The analysis entails both human and automatic evaluation. The process of error analysis, which is time-consuming and often tiresome, is conducted in the user-friendly Windows 10 application TREAT. Prior to annotation, training is conducted in order to familiarize the annotator with MQM, which is used in the annotation task, and with the interface of TREAT. The annotation guidelines, elaborated with examples, are provided. The evaluation is also conducted with the automatic metrics BLEU and CHRF++ in order to assess their segment-level correlation with human annotations on three different levels: accuracy, mistranslation, and the total number of errors. Our findings indicate that neither the total number of errors, nor the most prominent error category and subcategory, shows consistent and statistically significant segment-level correlation with the selected automatic metrics.
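The segment-level correlation described above can be illustrated with Spearman's rank correlation between metric scores and per-segment human error counts. The sketch below is a minimal pure-Python implementation; the chrF++ scores and error counts are hypothetical illustration data, not results from the paper.

```python
from statistics import mean

def average_ranks(values):
    """Assign 1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-segment chrF++ scores and human error counts.
chrf_scores = [62.1, 48.3, 55.0, 70.4, 41.9]
error_counts = [2, 5, 3, 1, 6]
print(spearman(chrf_scores, error_counts))  # → -1.0 (perfect inverse ranking)
```

A strong negative correlation is what one would expect if the metric tracked annotation quality well (more errors, lower score); the paper's finding is that such correlations are not consistently significant at the segment level.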

Highlights

  • Machine translation (MT) is used on a daily basis by millions of people and for a range of use cases [1]

  • After Neural Machine Translation (NMT) supplanted its predecessor, phrase-based statistical MT (SMT), many research initiatives have focused on translation error types in an attempt to better describe the differences between the two approaches

  • Automatic metrics employed in the paper depend on the availability of human reference translations


Introduction

Machine translation (MT) is used on a daily basis by millions of people and for a range of use cases [1]. While it will not replace humans any time soon, it can be used as a tool to enhance productivity [2]. Different types of data vary in structure, genre, and style, which can result in MT outputs of quite different quality. After Neural Machine Translation (NMT) supplanted its predecessor, phrase-based statistical MT (SMT), many research initiatives have focused on translation error types in an attempt to better describe the differences between the two approaches. A review of translation quality definitions is given in [4].

