Abstract

As children learn, they use speech to express words and their hands to gesture. This study investigates the real-time interplay between gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (6 boys, 6 girls) from kindergarten (n = 5) and first grade (n = 7) participated. Each verbal utterance and each gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied cross-recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction of that interaction at different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry of the gesture–speech interaction: for younger children, the balance leans more toward gestures leading speech in time, whereas for older children it leans more toward speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to both lower and higher understanding levels. Yet for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are also more synchronized in time for older children. A higher score on the schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, but only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and speech. The picture that emerges from our analyses suggests that the relation between gestures, speech, and cognition is more complex than previously thought. We suggest that temporal differences and the asymmetry in influence between gestures and speech arise from the simultaneous coordination of synergies.
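The core of the CRQA described above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline (full CRQA also involves windowing and measures such as determinism), and the toy data and function names below are invented for illustration: two categorical "skill level" codings, a cross-recurrence matrix marking where the two streams match, and a diagonal-wise lag profile whose asymmetry around lag 0 hints at which stream leads in time.

```python
import numpy as np

def cross_recurrence_matrix(x, y):
    """Cross-recurrence matrix for two categorical series:
    R[i, j] = 1 when x[i] matches y[j]."""
    x = np.asarray(x)
    y = np.asarray(y)
    return (x[:, None] == y[None, :]).astype(int)

def diagonal_profile(R, max_lag):
    """Recurrence rate on each diagonal of R. A positive lag k reads
    R[i, i+k], i.e. the first series matching the second one k steps
    later (the first series leading)."""
    return {lag: float(np.mean(np.diagonal(R, offset=lag)))
            for lag in range(-max_lag, max_lag + 1)}

# Toy codings on a 4-point complexity scale, constructed so that
# gestures reach each level one step before speech does.
gestures = [1, 1, 2, 2, 3, 3, 4, 4]
speech   = [1, 1, 1, 2, 2, 3, 3, 4]

R = cross_recurrence_matrix(gestures, speech)
profile = diagonal_profile(R, max_lag=2)
# In this toy series, lag +1 (gestures leading speech by one step)
# recurs perfectly, so the profile is skewed toward positive lags.
print(profile)
```

Comparing the recurrence rates at positive versus negative lags is one simple way to quantify the kind of temporal asymmetry between gestures and speech that the abstract reports.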

Highlights

  • Our results suggest that speech and gestures may be more tightly coupled for the older children in first grade and children with a high language score, because their speech and gesture systems are more developed

  • The reason that speech leads gestures for these children may also stem from this developmental process, and might be enhanced by the emphasis on language in first grade


Introduction

How do children learn and develop understanding? How does cognitive change arise? In developmental psychology, this is one of the most intriguing questions, as evidenced by the considerable literature on the topic (see, for instance, Piaget and Cook, 1952; Sternberg, 1984; Perry et al., 1988; Siegler, 1989; Carey and Spelke, 1994; Vygotsky, 1994; Thelen, 2000; Gelman, 2004; Anderson et al., 2012; Van der Steen et al., 2014). One window on cognitive change is the gesture–speech mismatch, in which a person's gestures convey information that differs from what that person says. It has been demonstrated that during such gesture–speech mismatches, people (children and adults) express their cognitive understanding in gestures before they are able to put it into words (Crowder and Newman, 1993; Gershkoff-Stowe and Smith, 1997; Garber and Goldin-Meadow, 2002). Gesture–speech mismatches are especially likely to occur when a person is on the verge of learning something new. This makes them a hallmark of cognitive development (Perry et al., 1992; Goldin-Meadow, 2003) and shows that gestures and cognition are coupled as well. In the literature, this link has been attributed to gestures serving as a medium to express emerging cognitive strategies (Goldin-Meadow et al., 1993), to highlight cognitively relevant aspects (Goldin-Meadow et al., 2012), to add action information to existing mental representations (Beilock and Goldin-Meadow, 2010), to simulate actions (Hostetter and Alibali, 2010), to decrease cognitive load during tasks (Goldin-Meadow et al., 2001), and to construct cognitive insight (Trudeau and Dixon, 2007; Stephen et al., 2009a,b; Boncoddo et al., 2010).

