Abstract

Despite the fast development of machine translation, output quality remains less than acceptable in certain language pairs. The aim of this paper is to determine the types of errors in machine translation output that cause comprehension problems for potential readers. The study is based on a reading task experiment using eye tracking, complemented by a retrospective survey, since eye tracking as a method is considered problematic and challenging (O’BRIEN, 2009; ALVES et al., 2009). The cognitive evaluation approach is used in an eye tracking experiment to rank the errors in the English–Lithuanian language pair from easiest to hardest to process, as perceived by readers of a machine-translated text. The tested parameters – gaze time and fixation count – demonstrate that different types of errors in machine-translated texts require different amounts of cognitive effort to process. The present work aims to contribute to research in the Translation Studies field by providing an analysis of error assessment of machine translation output.

Highlights

  • Although eye tracking research methodology is not free of complexity and ambiguity, many studies in translation research rely on eye tracking, as it has long been assumed and repeatedly shown that cognitive effort is well reflected in eye movement (ALVES et al., 2009; O’BRIEN, 2009; HVELPLUND, 2017)

  • The analysis of the findings demonstrated that segments with machine translation errors required longer gaze times and more fixations than segments with no errors

  • The main aim was to determine, by way of an eye tracking experiment, the types of errors that cause comprehension problems for potential readers, in order to evaluate the machine translation output


Summary

Introduction

Although eye tracking research methodology is not free of complexity and ambiguity, many studies in translation research rely on eye tracking, as it has long been assumed and repeatedly shown that cognitive effort is well reflected in eye movement (ALVES et al., 2009; O’BRIEN, 2009; HVELPLUND, 2017). Kornacki (2019) has contributed to the field by examining the applicability of eye tracking methodology in a computer-based translation classroom. Such studies are still not numerous and adopt various research designs. Studies on the cognitive effort involved in processing machine-translated output, many of which employ eye tracking methodology, abound (CARL et al., 2011, 2015; DAEMS et al., 2017; GONÇALVES, 2016; O’BRIEN, 2006, 2011; MOORKENS, 2018; SPECIA, 2011; CASTILHO, 2016). For instance, less cognitive effort has been found to be required when an interactive machine translation workbench is used (ALVES et al., 2016).
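
As a rough illustration of how gaze-based measures such as those named in the abstract (gaze time and fixation count) can be compared between segments with and without errors, the following Python sketch aggregates per-segment values and applies Welch's t-test. The data values and variable names are invented for illustration; this is not the authors' actual analysis pipeline.

    # Minimal sketch: comparing gaze time and fixation count between
    # segments with and without machine translation errors.
    # All values below are invented example data, not results from the study.
    from statistics import mean
    from scipy.stats import ttest_ind

    # Hypothetical per-segment measurements (one value per segment).
    gaze_time_error = [2.8, 3.1, 3.5, 2.9, 3.3]   # seconds, segments with errors
    gaze_time_clean = [1.9, 2.2, 2.0, 2.4, 2.1]   # seconds, error-free segments
    fixations_error = [12, 14, 15, 13, 16]        # fixation counts, segments with errors
    fixations_clean = [8, 9, 7, 10, 9]            # fixation counts, error-free segments

    for name, err, clean in [
        ("gaze time (s)", gaze_time_error, gaze_time_clean),
        ("fixation count", fixations_error, fixations_clean),
    ]:
        # Welch's t-test (does not assume equal variances between groups).
        t, p = ttest_ind(err, clean, equal_var=False)
        print(f"{name}: error mean={mean(err):.2f}, "
              f"clean mean={mean(clean):.2f}, t={t:.2f}, p={p:.3f}")

In this kind of comparison, higher mean gaze times and fixation counts for the error segments would be read as indicating greater cognitive effort, in line with the finding reported in the highlights.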
