Abstract
Five experiments are reported that investigate the distribution of selective attention to the verbal and non-verbal components of an utterance when conflicting information exists in these channels. A Stroop-type interference paradigm is adopted in which attributes from the verbal and non-verbal dimensions are placed into conflict. Static directional (deictic) gestures and corresponding spoken and written words show symmetrical interference (Experiments 1, 2 and 3), as do directional arrows and spoken words (Experiment 4). This symmetry is maintained when the task is switched from a manual key press to a verbal naming response (Experiment 5), suggesting that the mutual influence of the two dimensions is independent of spatial stimulus-response compatibility. It is concluded that the results are consistent with a model of interference in which information from pointing gestures and speech is integrated prior to the response selection stage of processing.

Introduction

In the mid-1980s a number of authors published work which criticised the widely held view of gestures and other non-verbal behaviours as “body language”. For instance, Rimé (1983) and McNeill (1985) challenged the notion, largely established by Argyle (e.g. Argyle, 1975), that gestures form part of a system of body movements which might offer a privileged means of knowing and perceiving one another, a system thought to follow its own laws and to transmit affective, cognitive and regulating mechanisms distinct from those carried by any accompanying speech.

McNeill’s (1985) article suggested that gestures and speech, far from being psychologically distinct, “share a computational stage; they are, accordingly, parts of the same psychological structure” (p. 350). This prompted rebuttals from Feyereisen (1987) and Butterworth and Hadar (1989), with accompanying replies from McNeill (1987b, 1989). Most seem to agree that gesture production depends, to some extent, on the mechanisms responsible for speech production (see also Rimé, 1983; Kendon, 1983). The arguments centred on specifying the locus of the interaction, elaborating McNeill’s conception of inner speech as the shared computational stage.

This work represented a shift in emphasis from the social impact of non-verbal behaviour to an approach which sought to examine the processes underlying the performance of body movements and, in particular, the relationships between these processes and the structures mediating vocal behaviour. However, despite a relatively large amount of research on gesture and speech production, the field of gesture comprehension remains a “neglected field in cognitive psychology” (Feyereisen, 1991, p. 57). The main aim of this study was to begin to redress this imbalance by studying the comprehension of gestures within an information-processing framework. More specifically, we ask whether gestures performed concurrently with spoken and written words influence the processing of that verbal signal and, reciprocally, whether verbal processing modifies the processing of the gestural component of the utterance.