This experimental study investigates the potential impact of employing automatic speech recognition (ASR) and speech translation (ST) in consecutive interpreting (CI) through the use of a computer-assisted interpreting (CAI) tool. The tool used is Sight-Terp, an ASR-supported CAI tool designed and developed by the first author of this study. It offers multiple features, such as ASR, real-time ST, named entity highlighting, and automatically enumerated segmentation. The research adopts a within-subjects design, assessing participants’ output in scenarios with and without the use of Sight-Terp on a tablet. Twelve participants were recruited and asked to interpret four English speeches into Turkish in long CI mode, using Sight-Terp for two of them and a pen and paper for the other two. The data analysis is grounded in parameters of both accuracy and fluency. To capture differences in accuracy between the two settings, accuracy was measured as the mean number of correctly rendered semantic units (units of meaning), as defined by Seleskovitch (1989). Fluency, in turn, was quantified by tracking the frequency of disfluency markers in each session, including false starts, filled pauses, filler words, whole-word repetitions, broken words, and incomplete phrases. The results show that the integration of ASR into the two CI tasks improved the accuracy of the participants’ renditions. At the same time, however, it increased disfluencies and extended task durations compared with the tasks in which Sight-Terp was not used. The findings also point to potential improvements and modifications that could further enhance the tool’s utility. Future empirical studies using Sight-Terp will shed further light on the feasibility of ASR in the interpreting process and on cognitive aspects of human-machine interaction in CI.