Abstract
Computer-assisted interpreting (CAI) tools use speech recognition and machine translation to display numbers and names on a screen or to automatically suggest renditions of technical terms. One way to improve the usability of CAI tools may be to use augmented reality (AR) technology, which allows information to be displayed wherever convenient. Instead of having to look down at a tablet or a laptop, the interpreter can see the term or number projected directly into their field of vision, allowing them to maintain their focus on the speaker and the audio input. In this study, we investigated the affordances of AR in simultaneous interpreting. Nine professional conference interpreters each interpreted two technical talks: one with numerals, proper nouns, and suggestions for technical terms automatically shown on an AR display, and the other with an MS Word glossary on a laptop. The results indicate a hypothetical use case for AR technologies in interpreting but highlight practical limitations, such as discomfort in wearing the AR equipment, a lack of ergonomic and intuitive interaction with virtual objects, and distraction and interference with the interpreting process caused by the additional visual input.