Abstract

Recent research on multimodal discourse has explored the nature of semantic relations between different semiotic resources. Drawing on the interpretation of language as a social semiotic resource, this article proposes Intersemiotic Texture as the crucial property of coherent multimodal texts and presents a preliminary framework for cohesive devices between language and images. The framework is illustrated through an examination of print media to demonstrate how image–text relations are meta-functionally orchestrated across experiential, textual and logical meanings at the discourse stratum. A discourse-based model is suggested for analyzing image–text logical relations, complementing existing grammar-based approaches. This research also develops a meta-language to describe Intersemiotic Cohesive Devices from two complementary perspectives: Intersemiotic Cohesion not only functions to integrate different modes when multimodal discourse is conceptualized as a finished product, but also constitutes essential text-forming resources for semantic expansions across language and images during the ongoing contextualization of meanings in real time.
