Abstract

In contrast to symbol-manipulation approaches, Cognitive Linguistics offers a modal rather than an amodal account of meaning in language. From this perspective, the meanings attached to linguistic expressions, in the form of conceptualisations, have various properties in common with visual forms of representation. This makes Cognitive Linguistics a potentially useful framework for identifying and analysing language-image relations in multimodal texts. In this paper, we investigate language-image relations with a specific focus on intersemiotic convergence. Analogous with research on gesture, we extend the notion of co-text images and argue that images and language usages which are proximal to one another in a multimodal text can be expected to exhibit the same or consistent construals of the target scene. We outline some of the dimensions of conceptualisation along which intersemiotic convergence may be enacted in texts, including event-structure, viewpoint, distribution of attention and metaphor. We take as illustrative data photographs and their captions in online news texts covering a range of topics including immigration, political protests, and inter-state conflict. Our analysis suggests the utility of Cognitive Linguistics in allowing new potential sites of intersemiotic convergence to be identified and in proffering an account of language-image relations that is based in language cognition.

Highlights

  • Within linguistics, many paradigms have undergone a multimodal turn to view language as only one part of a much broader communicative complex and to include within their analytical purviews other non-linguistic modes

  • Our analysis suggests the utility of Cognitive Linguistics in allowing new potential sites of intersemiotic convergence to be identified and in proffering an account of language-image relations that is based in language cognition

  • We focus on intersemiotic convergence, as it is realised across several dimensions of construal, and as it occurs in another multimodal text-type, namely news photographs and their captions

Summary

Introduction

Within linguistics, many paradigms have undergone a multimodal turn, coming to view language as only one part of a much broader communicative complex and to include within their analytical purviews other non-linguistic modes. From a Cognitive Linguistics perspective, the shared form that characterises this echoic relation does not reside in the linguistic and visual structures of the text per se but between images and the mental imagery, in the form of conceptualisations, which both language usages and images instantiate. This view helps to address Forceville’s (1999: 170) concern that SFL approaches “compare visual structures too much with surface language instead of with the mental processes of which both surface language and images are the perceptible manifestations”. We see any reduplication between the two modes as being multi-dimensional and scalar rather than absolute, and we prefer the term intersemiotic convergence, which seems to more accurately capture this idea.

Multimodality in cognitive linguistics
Data and scope
Schematisation
Viewpoint
Windowing of attention
Metaphor
Conclusions
