Abstract

We tested whether adult listeners can simultaneously keep track of variations in pitch and syllable duration in order to segment continuous speech into phrases and group these phrases into sentences. The speech stream was constructed so that prosodic cues signaled hierarchical structures (i.e., phrases embedded within sentences) and non-adjacent relations (i.e., AxB rules within phrases), while transitional probabilities between syllables favored adjacent dependencies that straddled phrase and sentence boundaries. In Experiments 1 and 2, participants hierarchically segmented the stream and learned the grammar used to generate the phrases when prosodic cues were consistent with their native language. In Experiment 3, participants segmented the stream based on transitional probabilities when no prosodic cues were present and all syllables had the same pitch and duration. In Experiment 4, participants were able to exploit non-native prosody in order to learn hierarchical relations and non-adjacent dependencies. These results suggest that prosodic cues such as pitch declination and final lengthening provide a stronger basis for learning than transitional probabilities even when both are unfamiliar and not wholly consistent with native language informational structure.
