Abstract

Investigating infants’ ability to match visual and auditory speech segments presented sequentially allows us to understand more about the type of information they encode in each domain, as well as their ability to relate that information. One previous study found that 4.5-month-old infants’ preference for visual French or German speech depended on whether they had previously heard the respective language, suggesting a remarkable ability to encode and relate audio-visual speech cues and to use them to guide looking behavior. However, French and German differ in their prosody, so the infants may have based their matching not on phonological or phonetic cues but on prosodic patterns. The present study addressed this issue by tracking the eye gaze of 4.5-month-old German and Swedish infants cross-culturally in an intersensory matching procedure, comparing German and Swedish, two same-rhythm-class languages that differ in phonetic and phonological attributes but not in prosody. Looking times indicated that even when distinctive prosodic cues were eliminated, 4.5-month-olds were able to extract subtle language properties and sequentially match visual and heard fluent speech. This outcome held even when different individual speakers were used for the two modalities, ruling out the possibility that the infants matched speech patterns specific to one individual. This study confirms a remarkably early emerging ability of infants to match auditory and visual information. The fact that the information was matched despite sequential presentation demonstrates that it is retained in short-term memory, and thus goes beyond purely perceptual, here-and-now processing.
