Abstract

Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound “happier” than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of “balanced” major and minor key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach’s Well-Tempered Clavier (book 1), as well as all 24 of Chopin’s Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma (“A,” “B,” “C,” etc.). Consistent with predictions derived from speech, we found major-key (nominally “happy”) pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally “sad”) pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrates interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting less pitch distinction, and vice versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
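
To make the score-based measures concrete, the sketch below shows one way such per-piece statistics could be computed from symbolic scores using the music21 toolkit. This is a minimal illustration, not the authors' pipeline: the file names are hypothetical, and attack density in score time is used as a rough stand-in for the paper's timing measure.

```python
# Minimal sketch (not the authors' pipeline): per-piece pitch height and a
# rough timing proxy from a symbolic score, using the music21 toolkit.
from statistics import mean

from music21 import converter

def piece_stats(path):
    """Return (mean MIDI pitch height, attacks per quarter note) for one score."""
    score = converter.parse(path)         # MusicXML, MIDI, etc.
    notes = list(score.flatten().notes)   # notes and chords; rests excluded
    pitches = [p.midi for n in notes for p in n.pitches]
    return mean(pitches), len(notes) / score.highestTime  # highestTime is in quarters

# Hypothetical file names; a real run would loop over all 72 pieces
# and average within the major-key and minor-key groups.
maj = piece_stats("wtc1_prelude_01_C_major.xml")
mnr = piece_stats("wtc1_prelude_20_c_minor.xml")
print(f"pitch-height difference: {maj[0] - mnr[0]:+.1f} semitones")
print(f"attack-rate ratio (major/minor): {maj[1] / mnr[1]:.2f}")
```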

Highlights

  • Language and music are highly developed and inter-related communicative systems employed and enjoyed by all known cultures

  • Our corpus consisted of three 24-piece sets, with each set containing one piece in each of the 12 major and minor keys

  • Results are reported from the analysis of variance (ANOVA) table for the fitted model, which provides omnibus F-tests for multi-level factors and, where applicable, their interactions with other predictors (see the sketch below)
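
As an illustration of this kind of omnibus test (not the paper's actual code or data), the sketch below fits mean pitch height on mode, set, and their interaction using statsmodels. The data are simulated, and the column names and the roughly two-semitone effect are assumptions based on the abstract.

```python
# Illustrative only: simulated data stand in for the paper's per-piece
# measurements; column names and effect sizes are assumptions.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
mode = np.tile(["major", "minor"], 36)  # 72 pieces: 3 sets x 24 keys
collection = np.repeat(["WTC1 Preludes", "WTC1 Fugues", "Chopin Preludes"], 24)
pitch = 64.0 + 2.0 * (mode == "major") + rng.normal(0.0, 3.0, size=72)

df = pd.DataFrame({"pitch": pitch, "mode": mode, "collection": collection})

# Fit mean pitch height on mode, set (here 'collection'), and their interaction,
# then read omnibus F-tests from the ANOVA table for the fitted model.
model = ols("pitch ~ C(mode) * C(collection)", data=df).fit()
print(anova_lm(model, typ=2))
```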

Introduction

Language and music are highly developed and inter-related communicative systems employed and enjoyed by all known cultures. Similarities between the two are striking, suggesting that they may share a common precursor (Wallin et al., 2000; Mithen, 2005). Linguists have long recognized that speech contains “music-like” features such as the frequency sweeps commonly associated with musical melodies (Steele, 1775). Similarities between the domains range from high-level organization, such as structure (Lerdahl and Jackendoff, 1985), to low-level processing, such as neural markers of semantic meaning (Koelsch et al., 2004). Some of the similarities between music and speech are innate and related to shared underlying processes (Patel, 2003), while others appear to be the result of enculturation and experience.

