Abstract

Simultaneous interpreting with text (SIMTXT) is generally considered cognitively more demanding than either simultaneous interpreting (SI) or sight translation (STT). However, little research has examined how interpreters proportionally allocate attention between the auditory and visual channels during SIMTXT. This study examines attention patterns during SIMTXT and investigates the relationship between interpreters’ attention patterns and the quality of their interpreting output. Nine professional interpreters were recruited to perform SIMTXT, interpreting six English source texts (STs) into Chinese. The interpreters listened to the audio input while the transcribed text of that input was displayed on screen by Translog-II. Eye movements were recorded with an eye-tracker (Tobii TX300), and the interpretations were recorded with Audacity. The gaze and spoken data (source and target) were synchronized at the word level and aligned with the final STs and target texts (TTs). We explored and categorized the visual and auditory attention patterns based on ear-voice span (EVS), eye-voice span (IVS), and ear-eye span (EIS), and identified three types of attention patterns in our data: ear-dominant (ED), eye-dominant (ID), and ear-eye-balanced (EIB). In addition, the interpreting output was annotated by three professional interpreters using an error taxonomy adapted from the Multidimensional Quality Metrics (MQM) framework. We found that EIB interpreters produced the lowest translation quality in terms of total error count and accuracy, followed by ED and then ID interpreters. ED interpreters produced the highest translation quality in terms of fluency, followed by EIB and then ID interpreters.
