Abstract
Quality-assessment models for live interlingual subtitling are virtually non-existent. In this study we investigate whether and to what extent existing models from related translation modes, more specifically the Named Entity Recognition (NER) model for intralingual live subtitling, provide a good starting point. Having conducted a survey of the major quality parameters in different forms of subtitling, we proceed to adapt this model. The model measures live intralingual quality on the basis of different types of recognition error by the speech-recognition software and edition error by the respeaker, with reference to their impact on the viewer’s comprehension. To test the adapted model we conducted a context-based study comprising the observation of the live interlingual subtitling process of four episodes of Dansdate, broadcast by the Flemish commercial broadcaster VTM in 2015. The process observed involved four “subtitlers”: the respeaker/interpreter, a corrector, a speech-to-text interpreter and a broadcaster, all of whom performed different functions. The data collected allow errors in the final product and in the intermediate stages to be identified, including when and by whom they were made. The results show that the NER model can be applied to live interlingual subtitling if it is adapted to deal with errors specific to translation proper.
Highlights
Quality-assessment models for live interlingual subtitling with speech recognition are virtually non-existent, and designing such a model is a complex undertaking.
One could summarize the different production processes as follows: in the pre-prepared intralingual and interlingual modes, the subtitles are produced with dedicated subtitling software through a non-live rephrasing or translation process in post-production; in live intralingual subtitling augmented by speech recognition, the subtitles are produced mainly with speech-to-text software through a live form of rephrasing; and in live interlingual subtitling, this live feature is combined with a variant of simultaneous interpreting.
The aim of this article is fourfold:
1. to review briefly the main quality parameters or criteria used in subtitling practices with which live interlingual subtitling shares some common ground, that is, intralingual and interlingual pre-prepared subtitling, and intralingual live subtitling with speech recognition, with a brief excursion into simultaneous interpreting (SI);
2. to develop a tentative quality-assessment model for interlingual live subtitling based on this review;
3. to test the usability of the model in a case study;
4. to assess the results produced by means of the procedure currently used at VTM, the main commercial Flemish broadcaster, and formulate suggestions for further research.
Summary
Quality-assessment models for live interlingual subtitling with speech recognition are virtually non-existent, and designing such a model is a complex undertaking. One reason for this is the relative novelty of the translation mode, which means that research is scarce and both practical experience and data are limited. As far as the process is concerned, live interlingual subtitling shares the most common ground with live intralingual subtitling. Both modes require a form of “respeaking” (the term is a bit of a misnomer, since the subtitler does more than respeak alone), a procedure that has been described as follows: “In an average live subtitling session, one person watches and listens to the television program[me] as it is broadcast live. This so-called respeaker speaks directly to a speech recognizer, which produces a draft subtitle. This can be done either by the respeaker (the Mono-LS-model) or by an additional editor who will quickly correct the output of the speech recognition program before the subtitles are broadcast (the Duo-LS-model).” (Remael, Van Waes & Leijten, 2014, p. 124)
Published in: Linguistica Antverpiensia, New Series – Themes in Translation Studies