Abstract

Human raters’ assessment of interpreting is a complex process. Previous researchers have mainly relied on verbal reports to examine this process. To advance our understanding, we conducted an empirical study, collecting raters’ eye-movement and retrospection data in a computerised interpreting assessment in which three groups of raters (n = 35) used an analytic rubric to assess 12 English-to-Chinese consecutive interpretations. We examined how the raters interacted with the source text, the rating scale, and the audio player displayed on the computer screen when they were assessing. We found that a) the source text and the rating scale were competing for the raters’ visual attention, with the former attracting more attention than the latter across the rater groups; b) when the raters were consulting the rating scale, they fixated less frequently on the sub-scale of target language quality than on the other two sub-scales; c) the rater groups did not seem to exhibit substantially discrepant gazing behaviours overall, although different eye-movement patterns emerged for certain sub-scales; and d) the raters utilised an array of strategies and shortcuts to facilitate their assessment. We discuss these findings in relation to rater training and validation of score meaning for interpreting assessment.
