Abstract

How do infants learn the sounds of their native language when many simultaneous sounds compete for their attention? Adults and children detect changes to speech sounds in complex scenes better than changes to other sounds. We examined whether infants show a similar bias, detecting changes to human speech better than changes to nonspeech sounds (musical instruments, water, and animal calls) in complex auditory scenes. We used a change deafness paradigm to test whether 5-month-olds' change detection is biased toward certain high-level sound categories (e.g., biological or human-generated sounds) or whether it depends on low-level salient physical features, such that detection is better for sounds with more distinct acoustic properties, such as water. In Experiment 1, 5-month-olds showed some evidence of detecting speech and music changes relative to no-change trials. In Experiment 2, when speech and music were compared separately with animal and water sounds, infants detected changes to speech and water across scenes, but not changes to music. Infants' change detection is thus both biased toward certain sound categories, as they detected small speech changes better than changes to other sounds, and affected by the size of the acoustic change, paralleling young infants' attentional priorities in complex visual scenes. By 5 months, infants show some preferential processing of speech changes in complex auditory environments, which could help bootstrap the language learning process.
