Abstract

Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals showed no age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger (4 to 6.5 months), monolinguals showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, suggesting a surprise or novelty response. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. These results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

Highlights

  • A few weeks after birth and several months before they begin producing canonical babbling, infants can perceptually integrate audio and visual cues of speech articulation

  • Language experience influences the representation of audiovisual speech; infants learning two auditory phonological systems may be more sensitive to visual cues of articulation

  • Infants were from three groups with different language experience: 28 monolingual infants with hearing parents (12 girls, mean age = 6.2 months), unimodal bilingual infants with hearing parents (8 girls, mean age = 6.1 months), and bimodal bilingual infants with a Deaf mother (14 girls, mean age = 6.3 months)

Summary

INTRODUCTION

A few weeks after birth, and several months before they begin producing canonical babbling, infants can perceptually integrate audio and visual cues of speech articulation. Eight-month-old bilinguals are better than monolingual infants of the same age at distinguishing two different languages when the languages are silently articulated (Sebastián-Gallés, Albareda-Castellot, Weikum, & Werker, 2012; Weikum et al., 2007). These findings suggest that language experience influences the representation of audiovisual speech and that infants learning two auditory phonological systems may be more sensitive to visual cues of articulation. Infants with and without experience of sign language, as well as adult signers, focus mainly on the face rather than the hands when perceiving sign language (De Filippo & Lansing, 2006; Emmorey, Thompson, & Colvin, 2008; Muir & Richardson, 2005; Palmer, Fais, Golinkoff, & Werker, 2012). This increased attention to the face is the hypothesized mechanism for the enhancement of certain aspects of face processing in Deaf and hearing signers compared to non-signers (Bettger, Emmorey, McCullough, & Bellugi, 1997; Emmorey, 2001; McCullough & Emmorey, 1997; Stoll et al., 2018). Bimodal bilinguals were therefore expected to show less sensitivity to audiovisual incongruences in syllable articulation and to shift their visual attention to the mouth at a later age.

Participants
Procedure
Findings
DISCUSSION