This study explores whether live transcription generated by automatic speech recognition (ASR) can facilitate simultaneous interpreting. It reports an analysis of trainee interpreters’ perceptions based on post-task structured interviews following an eye-tracked interpreting task, performed by a group of trainee interpreters from a postgraduate professional interpreting programme, in which the first half was completed without live transcription and the second half with it. The interviews were analysed in triangulation with eye-tracking data on the participants’ interpreting behaviours. The results show that most participants perceived live transcription as beneficial, with the data indicating improved performance and lower error rates for terminology, numbers, and proper names. It was also found that while some interpreters reported being able to manage multimodal input adeptly, others reported difficulty optimizing their focus of attention when live transcription was provided. The overall interference score in interpreting with live transcription rose from 9 to 13.2, suggesting fluctuating cognitive demand. The eye-tracking data further corroborate these attentional dynamics, echoing participants’ self-reported behaviours. The study points to the need for training programmes to equip interpreters with the skills to use technological tools such as live transcription, ensuring optimal attention management and overall performance.