Abstract

Current research on music processing and on syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language, and pitch perception is modulated by musical training. The present study investigated how music and language interact in the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances that were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). We recorded ERPs and behavioural responses from 22 musicians and 20 nonmusicians performing a statement/question discrimination task. Participants responded faster to simultaneous expectancy violations in the melodic and linguistic stimuli. Moreover, musicians performed better than nonmusicians, which may be related to their enhanced pitch-tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians showed a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial effect of expertise could be attributed to its strengthening of general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing in the pitch dimension, and further demonstrate a potential modulation by musical expertise.
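The abstract does not specify the expectancy model; studies in this line commonly derive note probabilities by statistical learning over a melodic corpus (e.g., variable-order models such as Pearce's IDyOM). As a toy illustration only, not the study's actual model, the sketch below estimates the conditional probability of a melody's final note with a first-order Markov (bigram) model; every function name and number in it is hypothetical.

```python
from collections import defaultdict

def train_bigram(melodies):
    """Count pitch-to-pitch transitions in a training corpus.
    melodies: list of melodies, each a list of MIDI pitch numbers."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def note_probability(counts, prev, candidate, alpha=1.0, n_pitches=128):
    """Laplace-smoothed P(candidate | prev); smoothing keeps unseen
    continuations from receiving zero probability."""
    total = sum(counts[prev].values()) + alpha * n_pitches
    return (counts[prev][candidate] + alpha) / total

# Hypothetical toy corpus (MIDI note numbers); real models train on
# large melodic corpora.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 62, 60],
]
model = train_bigram(corpus)
print(note_probability(model, 62, 60))  # expected, high-probability ending
print(note_probability(model, 62, 61))  # unexpected, low-probability ending
```

Under such a model, an "expected" final note is simply one with high conditional probability given the preceding context, which is the sense in which the stimuli's endings are described as high- or low-probability.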

Highlights

  • Music and language are two of the most characteristic human attributes, and there has been a surge of recent research interest in investigating the relationship between their cognitive and neural processing (e.g., Carrus et al., 2011; Koelsch and Jentschke, 2010; Maess et al., 2001; Patel et al., 1998a, 1998b).

  • There was a significant interaction between prosody and note-probability (F(1,38) = 6.53, p = .015, η² = .15), driven primarily by the difference between questions (QH–QL: t(39) = 2.93, p = .006) rather than between statements (p > .05); a sketch of this kind of interaction test follows the highlights.

  • Reaction times were shorter when statements were paired with a high-probability note than when statements were paired with a low-probability note.
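For a two-by-two repeated-measures contrast like prosody × note-probability, an interaction with one numerator degree of freedom reduces to a paired t-test on difference-of-differences scores (the study's F(1,38) additionally reflects the musician/nonmusician group factor, which this sketch omits). A minimal illustration on simulated reaction times; the numbers below are invented, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 40  # illustrative sample size

# Simulated mean RTs (ms) per condition: S/Q = statement/question,
# H/L = high/low-probability final note. Values are invented.
base = rng.normal(650, 60, n)
SH = base + rng.normal(0, 20, n)
SL = base + 10 + rng.normal(0, 20, n)
QH = base + 40 + rng.normal(0, 20, n)
QL = base + 15 + rng.normal(0, 20, n)

# Interaction: does the note-probability effect differ between
# statements and questions? For this design, F(1, n-1) = t**2.
t, p = ttest_rel(SH - SL, QH - QL)
print(f"interaction: F(1,{n-1}) = {t**2:.2f}, p = {p:.3f}")

# Simple effect within questions (QH vs. QL), as reported above.
t_q, p_q = ttest_rel(QH, QL)
print(f"questions: t({n-1}) = {t_q:.2f}, p = {p_q:.3f}")
```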

Introduction

Music and language are two of the most characteristic human attributes, and there has been a surge of recent research interest in investigating the relationship between their cognitive and neural processing (e.g., Carrus et al., 2011; Koelsch and Jentschke, 2010; Maess et al., 2001; Patel et al., 1998a, 1998b). Music and language use different elements (tones and words, respectively) to form complex hierarchical structures (harmony and sentences, respectively), governed by sets of rules that determine their syntax (Patel, 1998, 2003, 2012; Slevc et al., 2009). Musical elements can be played concurrently to form harmony, but this is not the case for language. In this context, Patel (1998) hypothesised that what music and language have in common is that experienced listeners organise their elements in a hierarchical fashion based on learned rules (McMullen and Saffran, 2004; Slevc et al., 2009). Expectations can be disrupted in language, resulting in unexpected or incorrect sentences (Gibson, 2006).
