Abstract

In this article, we discuss the possibility of applying corpus-based methods to the analysis of a complex multimodal translation product such as film audio description. For this purpose, results are presented from two research projects in which a corpus of audio-described films has been compiled and tagged using multimodal annotation software called Taggetti. A multimodal concordancing tool has been developed for efficient exploitation of the corpus. The data presented in this article show how the narratological structure, i.e. the world evoked by the narrative representations, and the filmic language, which covers how this world is framed by the camera and shaped by the montage, are translated in the audio description, and how these levels combine and interact in the multimodal meaning-making process.
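To illustrate the kind of query such a tool supports, the sketch below shows one possible way a multimodal concordance search over time-aligned annotations could work. The `Annotation` layout, the tier names, and the `concordance` function are illustrative assumptions for this sketch, not the actual data model of Taggetti or of the concordancer developed in the projects.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    tier: str     # annotation layer, e.g. "camera" or "AD_script" (hypothetical tier names)
    start: float  # segment start time, in seconds
    end: float    # segment end time, in seconds
    label: str    # tag value (e.g. a shot type) or the AD text itself

def overlaps(a: Annotation, b: Annotation) -> bool:
    """Two time-aligned annotations co-occur if their intervals intersect."""
    return a.start < b.end and b.start < a.end

def concordance(corpus: list[Annotation], tier: str, label: str,
                context_tier: str) -> list[tuple[Annotation, Annotation]]:
    """Pair every annotation on `tier` bearing `label` with the
    co-occurring annotations on `context_tier` (e.g. the AD script)."""
    hits = [a for a in corpus if a.tier == tier and a.label == label]
    return [(h, c) for h in hits
            for c in corpus if c.tier == context_tier and overlaps(h, c)]

# Toy corpus: one camera tag aligned with one audio description segment.
corpus = [
    Annotation("camera", 12.0, 14.5, "close-up"),
    Annotation("AD_script", 12.3, 15.0, "She clenches her fists."),
]
for shot, ad in concordance(corpus, "camera", "close-up", "AD_script"):
    print(f"[{shot.start}-{shot.end}] {shot.label} -> {ad.label}")
```

A query of this shape is what lets a researcher ask, for instance, how a given filmic device (here a close-up) tends to be verbalised in the audio description across the corpus.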
