Abstract

Music can evoke strong emotions. Research has suggested that pleasurable chills (shivering) and tears (weeping) are peak emotional responses to music. The present study examines whether computational acoustic and lyrical features can decode chills and tears. The experiment used 186 self-selected pieces of music to evoke emotional responses in 54 Japanese participants. Machine learning analysis with L2-norm-regularized regression quantified decoding accuracy and identified the most informative features. In Study 1, time-series acoustic features significantly decoded emotional chills, tears, and the absence of either response, using information from within a few seconds before and after the onset of the three responses. The classification results revealed three significant time periods, suggesting that complex anticipation-resolution mechanisms lead to chills and tears. Chills were particularly associated with rhythmic uncertainty, whereas tears were related to harmony: violations of rhythmic expectancy may have triggered chills, while the harmonious overlap of acoustic spectra may have helped evoke tears. In Study 2, acoustic and lyrical features from entire pieces decoded the frequency of tears but not of chills. Mixed emotions stemming from happiness were associated with major chords, and lyric content about sad farewells contributed to the prediction of emotional tears, indicating that distinctive emotions in music may evoke a tear response. Considered alongside theoretical work, rhythm violations may biologically amplify both the pleasure- and fight-related physiological responses underlying chills, whereas tears may be evolutionarily embedded in the social-bonding effect of musical harmony and play a unique role in emotion regulation.
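
The abstract names L2-norm-regularized regression as the decoding method. As a rough illustration only, the sketch below shows a cross-validated, L2-penalized logistic-regression decoder applied to synthetic stand-ins for windowed acoustic features; the paper's actual features, labels, and pipeline are not given here, so all variable names, dimensions, and hyperparameters are hypothetical.

```python
# Minimal sketch of an L2-regularized decoding analysis (assumptions: synthetic
# data, hypothetical feature dimensions; not the authors' actual pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: one row per response event; columns are time-series
# acoustic descriptors (e.g., rhythm- and harmony-related features) extracted
# from a short window around the onset of each response.
n_events, n_features = 300, 40
X = rng.normal(size=(n_events, n_features))
# Labels: 0 = no response, 1 = chills, 2 = tears.
y = rng.integers(0, 3, size=n_events)

# L2-norm-regularized (ridge-penalized) logistic regression, the classifier
# family named in the abstract; C is the inverse regularization strength.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)

# Cross-validated decoding accuracy; chance level for three balanced
# classes is about 1/3.
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```

With real data, the fitted coefficients of such a model could then be inspected to see which acoustic features (e.g., rhythm- versus harmony-related) carry the most weight for each response class, which is the kind of feature-level interpretation the abstract reports.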
