Abstract

Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5–9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such ‘authentic cadence’ melody was paired with a ‘non-cadential’ (NC) melody matched in length, rhythm and melodic contour, but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC than NC melodies. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in music and language.
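
The consistency measure described above can be made concrete with a small worked example. Below is a minimal sketch, in Python, of one cloze-style index: the proportion of participants who sing the modal (most common) continuation for a given melody, averaged within the AC and NC conditions. The data values, identifiers, and the choice of this particular index are illustrative assumptions; the abstract does not specify the exact analysis used in the study.

```python
# Minimal sketch of a cloze-style consistency index for sung continuations.
# Assumption: each response has already been coded as a pitch class (0-11);
# all melody IDs and response values below are hypothetical.
from collections import Counter
from statistics import mean

def cloze_consistency(responses):
    """Proportion of responses that match the single most common pitch class."""
    counts = Counter(responses)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(responses)

# Hypothetical sung continuations for one matched AC/NC melody pair.
responses_by_melody = {
    ("melody_01", "AC"): [0, 0, 0, 0, 7, 0, 0, 0],   # most singers converge on one note
    ("melody_01", "NC"): [2, 7, 9, 0, 4, 7, 2, 11],  # responses are spread out
}

# Condition-level summary: mean consistency for AC vs. NC melodies.
for condition in ("AC", "NC"):
    scores = [cloze_consistency(r)
              for (_melody, cond), r in responses_by_melody.items()
              if cond == condition]
    print(condition, "mean consistency:", round(mean(scores), 2))
```

A natural alternative index, when responses are more evenly spread, is the entropy of the response distribution; either choice can then be compared against the note probabilities predicted by a computational model of melodic expectation.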


Introduction

Recent years have seen growing interest in cognitive and neural relations between music and language. One early demonstration of this overlap came from event-related potential (ERP) research, which revealed that a component known as the P600 is observed in response to syntactically challenging or anomalous events in both domains (Patel et al., 1998). The shared syntactic integration resource hypothesis (SSIRH) posits a distinction between domain-specific representations in long-term memory (e.g., stored knowledge of words and their syntactic features, and of chords and their harmonic features), which can be separately damaged, and shared neural resources that act upon these representations as part of structural processing. This “dual-system” model proposes that syntactic integration of incoming elements in language and music involves the interaction (via long-distance neural connections) of shared “resource networks” and domain-specific “representation networks” (see Patel, 2013, for a detailed discussion, including relations between the SSIRH and Hagoort’s (2005) “memory, unification, and control” model of language processing).
