Abstract

Humans communicate through multiple modalities. We offer an account of multi-modal meaning coordination, taking speech-gesture meaning coordination as a prototypical case. We argue that temporal synchrony (plus prosody) does not determine how to coordinate speech meaning and gesture meaning. Particularly challenging are cases of asynchrony and broadcasting, which we illustrate with empirical data. We propose that a process algebra account satisfies the desiderata. It models gesture and speech as independent but concurrent processes that can communicate flexibly with each other and exchange the same information more than once. The account utilizes the ψ-calculus, which provides agents, input-output channels, concurrent processes, and data transport of typed λ-terms. A multi-modal meaning is produced by integrating speech meaning and gesture meaning into one semantic package. Two cases of meaning coordination are handled in some detail: the asynchrony between gesture and speech, and the broadcasting of gesture meaning across several dialogue contributions. This account can be generalized to other cases of multi-modal meaning.
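
To give a concrete flavour of the intended result, the following Haskell sketch treats speech meaning and gesture meaning as typed predicates and integrates them into a single semantic package, loosely in the spirit of the round-ball example discussed later in the paper. It is an illustration only, not the paper's formalism: the entity representation, the predicate names, and the use of conjunction as the integration operation are our own illustrative assumptions.

    -- A minimal sketch, not the paper's formalism: speech and gesture meaning
    -- as typed predicates, integrated into one semantic package by conjunction.
    type Entity = String
    type Pred   = Entity -> Bool

    -- Meaning contributed by the spoken channel: "ball" (illustrative).
    speechMeaning :: Pred
    speechMeaning x = x `elem` ["ball1", "ball2"]

    -- Meaning contributed by the gestural channel: "round" (illustrative).
    gestureMeaning :: Pred
    gestureMeaning x = x `elem` ["ball1", "plate1"]

    -- One multi-modal meaning: \x -> ball(x) && round(x).
    multiModalMeaning :: Pred
    multiModalMeaning x = speechMeaning x && gestureMeaning x

    main :: IO ()
    main = print (filter multiModalMeaning ["ball1", "ball2", "plate1"])
    -- prints ["ball1"]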

Highlights

  • Before we present our own account of speech-gesture meaning coordination, we examine whether existing co-speech gesture accounts could meet all specified desiderata

  • We suggest that a process algebra account is able to cope with the independence of gesture meaning and speech meaning, cases of asynchrony, cases of blocking of information, cases of broadcasting, and algorithmic meaning coordination

  • We have identified substantial challenges for speech-gesture meaning coordination via a temporal constraint


Summary

Introduction

Humans do not communicate by speech alone: information can also be communicated with body postures, eye gaze, co-speech gestures, facial expressions, intonation, etc. The pieces of information communicated via different channels (e.g., visual and audio-acoustic) together constitute the overall communicated meaning. To formally model this idea of a multi-modal meaning, one needs a unified formal framework. The key idea is to model the dynamics of this meaning interaction in terms of independent but concurrent processes that can flexibly interact with each other. As we show, such an approach has important advantages over other accounts of multi-modal meaning. It treats gesture and speech as independent processes that operate concurrently, can communicate with each other flexibly, and can exchange the same information more than once.
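
As an informal illustration of this key idea (a sketch under simplifying assumptions, not the ψ-calculus account developed in the paper), the Haskell fragment below runs gesture and speech as two concurrent processes that exchange a typed meaning over a channel; the gesture process sends the same meaning twice, mimicking the broadcasting of gesture meaning across several dialogue contributions. All data types, channel names, and the toy meanings round(x) and ball(x) are illustrative assumptions.

    -- A minimal sketch, using Haskell channels as a stand-in for ψ-calculus
    -- input-output channels; not the authors' formalization.
    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, writeChan, readChan)

    newtype SpeechMeaning  = SpeechMeaning String  deriving Show
    newtype GestureMeaning = GestureMeaning String deriving Show

    -- One semantic package combining the two meanings.
    data MultiModal = MultiModal SpeechMeaning GestureMeaning deriving Show

    -- Gesture process: outputs its meaning on the channel and can output the
    -- same information more than once (broadcasting).
    gestureProcess :: Chan GestureMeaning -> IO ()
    gestureProcess chan = do
      let g = GestureMeaning "round(x)"
      writeChan chan g
      writeChan chan g   -- same meaning made available again

    -- Speech process: reads the gesture meaning whenever it arrives (it need
    -- not be temporally synchronous) and packages it with the speech meaning.
    speechProcess :: Chan GestureMeaning -> IO MultiModal
    speechProcess chan = do
      let s = SpeechMeaning "ball(x)"
      g <- readChan chan
      return (MultiModal s g)

    main :: IO ()
    main = do
      chan <- newChan
      _ <- forkIO (gestureProcess chan)
      speechProcess chan >>= print

A second reader of the channel (for example, a later dialogue contribution) could pick up the meaning made available by the second writeChan; this is the pattern taken up under "Broadcasting and multi-modal anaphora" below.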

A case study
Challenges for coordinating speech meaning and gesture meaning
Challenging cases
Why existing co-speech gesture accounts do not fully meet the challenges
Planners for multi-modal integration
Grammar-based accounts
SDRT accounts
Other formal pragmatic accounts
Upshot
A process algebra account of speech-gesture meaning coordination
The process algebra account
The process algebra account: A formal introduction
The round-ball example
General rendering of the λ-ψ-interaction
The λ-ψ-agents and how they interact
The role of deadlock δ
Broadcasting and multi-modal anaphora
Concluding remarks and future research