Abstract

Children’s naturalistic environments often contain noise, including background speech and environmental sounds, that disrupts the speed and accuracy of their spoken language processing. Integrating congruent auditory and visual speech cues, a skill that improves throughout childhood, facilitates language processing in background noise. The present study implemented an integrated eye-tracking and touch-screen paradigm to explore how children use visual speech cues in the presence of background speech and how these behaviors change with age. Typically developing children (ages 3–12 years) either heard (auditory-only) or heard and viewed (audiovisual) a female talker speaking sentences (e.g., Find the dog) in quiet or in the presence of a male two-talker speech masker at +2 dB SPL. Children were then instructed to select the image, from a set of three, that matched the sentence-final word. Children’s eye gaze was recorded during each trial, allowing us to quantify their fixations to the target versus the distractor images and thereby obtain fine-grained, real-time measurements of their language processing. Performance was also quantified by the accuracy of target-image selection. Discussion will focus on children’s spoken language processing in the presence of background speech and in the presence and absence of congruent audiovisual speech cues.
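For readers curious how fixation data of this kind are typically summarized, below is a minimal Python sketch of one common approach: computing, per trial, the proportion of gaze samples falling on the target image within a post-onset analysis window. The column names, window boundaries, and sample values are illustrative assumptions, not details taken from this study.

```python
# Illustrative sketch only; column names, window, and data are hypothetical,
# not drawn from the study described in the abstract.
import pandas as pd

# Hypothetical gaze samples: one row per eye-tracker sample per trial,
# with the area of interest (AOI) the gaze fell on at that moment.
gaze = pd.DataFrame({
    "trial":   [1, 1, 1, 1, 2, 2, 2, 2],
    "time_ms": [300, 400, 500, 600, 300, 400, 500, 600],
    "aoi":     ["target", "target", "distractor", "target",
                "distractor", "distractor", "target", "target"],
})

# Restrict to a post-word-onset analysis window (boundaries are illustrative).
window = gaze[(gaze["time_ms"] >= 300) & (gaze["time_ms"] <= 1800)]

# Proportion of samples on the target per trial: a standard index of
# real-time lexical processing in looking-while-listening designs.
prop_target = (
    window.assign(on_target=window["aoi"].eq("target"))
          .groupby("trial")["on_target"]
          .mean()
)
print(prop_target)
```

Such per-trial proportions can then be averaged within condition (auditory-only vs. audiovisual; quiet vs. masker) and related to age.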
