Abstract
This study analyses the interplay of the various communication modes that enable emotions to be transmitted efficiently from the source text (ST) to the target text (TT) in audio description (AD) as a multimodal text. It draws on existing experimental designs, including neutral and emotional conditions based on the congruency of stimuli from various modes (images, the semantic content or prosody of dialogue in a film, together with the semantic content of the AD). The article reviews the methodological contribution that Social Neuroscience could make to the study of multimodal translation. To this end, several neurobiological models and studies are discussed concerning multimodal emotional information processing (Brück, Kreifelts, & Wildgruber, 2011), the impact of multimodal emotional processing on subjects’ empathy (Regenbogen et al., 2012), and the dynamics of the neural networks involved in human empathy and communication during the presentation of multimodal stimuli (Regenbogen, Habel, & Kellerman, 2013). Finally, an experimental design focusing on the transfer of feelings and emotions in film AD, suitable for a potential pilot study, is presented.
Highlights
1.1 Cognitive processes underlying multimodal translation
In recent times, Translation Studies has adopted the concept of multimodality as a new approach to tackling the complex cognitive reality present in any translation process, in particular in those processes involving changes in communication codes or the intervention of different modes of expression in the source text (ST) as well as in the target text (TT).
Since this study aims to analyse the transmission of emotions through the different communication modes involved in cinema audio description (AD), the subject sample should consist of a group of visually impaired people.
As the starting point for its development, various theoretical and methodological models from the field of Social Neuroscience were taken as reference. This discipline integrates theories and methods intended to help us understand both human cognitive and affective processes and the behaviour of individuals and groups in society. Among such models is Brück et al.’s (2011) proposal, which addresses the following aspects from a comprehensive neuroscience standpoint: the acoustics of emotions, and the brain areas that control the processing of speech and other vocal sounds, which make human beings voice experts by their very nature.
Summary
Translation Studies has adopted the concept of multimodality as a new approach to tackling the complex cognitive reality present in any translation process, in particular in those processes involving changes in communication codes or the intervention of different modes of expression in the source text (ST) as well as in the target text (TT). If we acknowledge that the brain processes reality through external multisensory inputs, the manner in which audio describers access knowledge and represent it is necessarily based on an active processing of different communication modes. This would redefine the translation process involved in AD as an eminently multimodal one rather than an intersemiotic one, and would require engaging with the theoretical cognitive models that account for the brain’s processing of multimodality, as presented in section 2 of this article.
Published in: Linguistica Antverpiensia, New Series – Themes in Translation Studies