Abstract

For many years, the NER model has been used to assess the quality of live subtitles created by respeaking on television. In this article, I present and explore the NERLE model, an adaptation of the NER model for use at Live Events (LE). This new setting is a dynamic one: when subtitles created within it were assessed with the NER model, many new categories of error emerged that proved complicated to classify, rendering the regular pathways of analysis offered by the NER model insufficient. Some errors resulted from the new, more complex workflow and set-up required at events; others arose from the new communicative possibilities that live events offer: audience members and people speaking at events are able to interact with respeakers and to react and respond to the subtitles they produce in a way that people in a television programme cannot. This change, combined with the more complex access provision and accuracy assessment that live events entail, demands that additional steps be incorporated into the NER model analysis workflow to make it applicable to this setting. The article begins with a review of how the NER model developed within the landscape of subtitle accuracy analysis; next, the process involved in using the scoring system of the NER model as the basis for the NERLE model is examined; finally, the refinements of the NERLE model are presented and the scoring of a number of scenarios is discussed. It concludes with suggestions for further exploration and applications of the model.
