Abstract

An eye-tracking methodology was used to explore adults’ and children’s use of two utterance-based cues to overcome referential uncertainty in real time. Participants were first introduced to two characters with distinct color preferences. These characters then produced fluent (“Look! Look at the blicket.”) or disfluent (“Look! Look at thee, uh, blicket.”) instructions referring to novel objects in a display containing both talker-preferred and talker-dispreferred colored items. Adults (Expt 1, n = 24) directed a greater proportion of looks to talker-preferred objects during the initial portion of the utterance (“Look! Look at…”), reflecting the use of indexical cues to talker identity. However, they immediately reduced consideration of the object bearing the talker’s preferred color when the talker was disfluent, suggesting that they infer disfluency to be more likely when a talker describes dispreferred objects. Like adults, 5-year-olds (Expt 2, n = 27) directed more attention to talker-preferred objects during the initial portion of the utterance. Children’s initial predictions, however, were not modulated when disfluency was encountered. Together, these results demonstrate that adults, but not 5-year-olds, can act on information from two talker-produced cues within an utterance (talker preference and speech disfluencies) to establish reference.

Highlights

  • The goal of the present study was to examine the effect that preference information and disfluency cues have on listeners’ expectations about talkers’ referential intent toward novel objects.

  • We examined whether adults and children would use cues based on knowledge of talkers’ gender-stereotyped color preferences and talkers’ voice characteristics, in conjunction with cues based on filled-pause disfluencies, to overcome referential uncertainty in the context of novel words in real time.

Introduction

Imagine that a mother and her preschooler are baking a cake, and the mother instructs her child to “Pass the spatula!” How might the child, who does not know what a spatula is, identify the intended referent from among the many unfamiliar kitchen objects? Various speaker-produced behaviors provide cues that help young word learners identify the intended referent of a novel word, including eye gaze direction (e.g., Baldwin, 1991, 1993; Graham et al., 2010), gestures (e.g., O’Neill et al., 2002), facial expressions (e.g., Akhtar and Tomasello, 2000; Henderson and Graham, 2005; Graham et al., 2006), and emotional prosody (Berman et al., 2013a). Saylor and Troseth (2006) introduced 3-year-olds to pairs of novel toys, with an experimenter indicating which toy she preferred (e.g., “I like this one!”). In a related line of research, studies have demonstrated that preschoolers can use their knowledge of talker preferences to guide real-time referential processing (e.g., Creel, 2012, 2014; Borovsky and Creel, 2014). In these studies, the characters themselves were no longer depicted when the instructions were heard; children thus drew upon a set of associations (acoustic voice characteristics → talker → preferences) to help identify relevant referents in real time. Borovsky and Creel (2014) demonstrated that 3- to 10-year-old children, like adults, generate similar expectations when the associations involve generic knowledge instead of explicit preference information. Referential predictions were generated quickly, upon hearing the talker’s voice.

