Abstract

This article presents the methodology and results of a pilot study on the impact of three popular and widely accessible machine translation engines (developed by Google, Microsoft and DeepL) on the pace of post-editing work and on the overall effort involved in post-editing raw MT output. Fourteen volunteers were asked to translate and post-edit two source texts of a similar character and level of complexity. The results of their work were collected and compared to produce a set of quantitative and qualitative data, which was then used to draw conclusions about the general rate of post-editing work and the quality of the post-edited sentences produced by the subjects. The aim of the pilot study described below was to determine whether the applied method can be successfully used in more extensive studies on the quality and impact of machine translation in the English–Polish language pair and on the potential of MT solutions on the Polish translation market.

Highlights

  • Recent years have brought considerable advances in machine translation (MT) technology, which is used to automatically translate source texts into multiple target languages without human intervention

  • The subjects were divided into three groups and asked to perform a two-stage task consisting of translation and post-editing of the provided source material

  • The analysis of the practical implementation of the proposed measurement method and of the results obtained with it revealed several drawbacks that should be considered before applying this methodology in more extensive studies on the impact of MT solutions on post-editing effort and on the quality of the final product in the English–Polish language pair. Some of these drawbacks resulted from the constraints that influenced the pilot study; others were caused by a lack of experience in organizing similar research


Introduction

Recent years have brought considerable advances in machine translation (MT) technology, which is used to automatically translate source texts into multiple target languages without human intervention (cf. e.g. Bojar et al. 2016). Numerous scholars interested in MT technology have been conducting research aimed at establishing coherent and unified methods of MT output quality assessment (Bojar et al. 2016). Apart from traditional human-based methods, which, for instance, involve evaluating the Fluency, Adequacy and Comprehension of machine-produced translations (Han et al. 2017), several Automatic Evaluation Metrics, such as BLEU or METEOR, have been developed to enable a reliable comparison of various MT engines and their efficiency with minimum human effort (Koehn 2010). This task remains a difficult one, as the overall quality of MT output depends on numerous factors, such as the source text type, the language pair and the target language (Hutchins and Somers 1992).
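To illustrate how an automatic evaluation metric of this kind scores MT output against a reference translation, the following is a minimal sketch of sentence-level BLEU (clipped n-gram precision combined with a brevity penalty). The function names and the add-one smoothing are illustrative simplifications for readability, not the exact formulation used in the works cited above.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of length n occurring in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Counter intersection clips each n-gram count at the
        # number of times it appears in the reference.
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        # Add-one smoothing so a single missing n-gram order
        # does not zero out the whole score.
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * geo_mean
```

A perfect match scores 1.0, and scores fall as the candidate diverges from the reference in word choice, order, or length, which is the property that lets such metrics rank the raw output of competing MT engines with no human effort.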

