Abstract

Language and music share many rhythmic properties, such as variations in intensity and duration leading to repeating patterns. Perception of rhythmic properties may rely on cognitive networks that are shared between the two domains. If so, then variability in speech rhythm perception may relate to individual differences in musicality. To examine this possibility, the present study focuses on rhythmic grouping, which is assumed to be guided by a domain-general principle, the Iambic/Trochaic law, stating that sounds alternating in intensity are grouped as strong-weak, and sounds alternating in duration are grouped as weak-strong. German listeners completed a grouping task: They heard streams of syllables alternating in intensity, duration, or neither, and had to indicate whether they perceived a strong-weak or weak-strong pattern. Moreover, their music perception abilities were measured, and they filled out a questionnaire reporting their productive musical experience. Results showed that better musical rhythm perception ability was associated with more consistent rhythmic grouping of speech, while melody perception ability and productive musical experience were not. This suggests shared cognitive procedures in the perception of rhythm in music and speech. Also, the results highlight the relevance of considering individual differences in musicality when aiming to explain variability in prosody perception.

Highlights

  • The rhythmic properties of music and language share some notable features: Both music and language are grouped into phrases that are marked by pauses as well as by differences in tone height and duration of beats and syllables (Patel, 2003)

  • Results indicate a significant effect when duration was compared to intensity (Dur-Int), with the negative β suggesting that participants gave fewer trochaic responses in the duration condition than in the intensity condition

  • The aim of the present study was to investigate the link between speech rhythm processing and musicality

Introduction

The rhythmic properties of music and language share some notable features: Both music and language are grouped into phrases that are marked by pauses as well as by differences in tone height and duration of beats and syllables (Patel, 2003). The current study focuses on one rhythmic similarity between music and language: an asymmetry in the distribution of cues between the beginnings and ends of larger units. In musical phrases, initial beats are marked by higher intensity, and final notes are marked by longer duration (Lerdahl & Jackendoff, 1983; Narmour, 1990; Todd, 1985). A similar distribution of rhythmic cues is found in metrical feet (i.e., the smaller rhythmic units consisting of one or more syllables that make up words): If metrical stress is trochaic, the prominent initial syllable of the (un-accented) foot is typically marked by increased intensity, whereas if metrical stress is iambic, the
