Abstract

Several cues are used to convey musical emotion, the two primary cues being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence than songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or “bimodal” hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with the degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency-following response (FFR) to a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear as measured via SMD, as well as with neural representation of F0 amplitude via the FFR, though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon the spectral resolution of the non-implanted ear.

Highlights

  • Cochlear implant (CI) technology has improved significantly over the past 30 years, enabling CI users to achieve high levels of speech understanding in quiet listening environments (Gifford et al., 2018; Gifford and Dorman, 2018; Sladen et al., 2018); however, processing of more complex inputs remains a significant challenge for most CI users (e.g., Hsiao and Gfeller, 2012). At present, most modern CI processing strategies use an envelope-based approach in which a fixed pulse rate is amplitude modulated by the envelope of the signal

  • Bimodal listeners ranged in age from 24 to 79 years, and NH controls ranged in age from 22 to 71 years

  • Bimodal benefit was defined as the difference between scores in the bimodal condition and scores in the CI-alone condition
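As a concrete illustration of this definition, the benefit is simply the bimodal score minus the CI-alone score. The scores below are made-up placeholders for illustration only, not data from the study.

```python
# Hypothetical scores (percent correct) used only to illustrate the definition.
bimodal_score = 85.0    # performance with CI + contralateral acoustic hearing
ci_alone_score = 70.0   # performance with the CI alone
bimodal_benefit = bimodal_score - ci_alone_score   # = 15.0 percentage points
```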

Introduction

Most modern CI processing strategies use an envelope-based approach in which a fixed pulse rate is amplitude modulated by the envelope of the signal. During this process, the temporal fine structure of the input is discarded. The lack of spectro-temporal detail provided by most CI processing strategies prevents complex signals from being transmitted with accuracy, especially those requiring precise coding of pitch information, such as musical melodies, lexical tone, and vocal emotion (Chatterjee and Peng, 2008; Hsiao and Gfeller, 2012; Luo et al., 2007; Jiam et al., 2017). Music and emotion perception are often significantly poorer in CI users than in normal-hearing listeners.
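The following is a minimal, single-channel Python sketch of the envelope-based idea described above: the signal's temporal envelope (extracted here with a Hilbert transform and low-pass smoothing, one common choice) amplitude-modulates a fixed-rate pulse train, and the temporal fine structure is discarded. The sampling rate, pulse rate, filter settings, and single-channel simplification are illustrative assumptions, not the processing of any particular device.

```python
# Minimal single-channel sketch of envelope-based processing (illustrative only).
# Assumptions: 16 kHz sampling, 900 pps pulse rate, Hilbert-transform envelope
# smoothed with a 4th-order 200 Hz low-pass filter.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000                                  # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)               # 500 ms time axis
signal = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

# 1. Extract the temporal envelope; the carrier's fine structure is discarded.
envelope = np.abs(hilbert(signal))
b, a = butter(4, 200 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# 2. Build a fixed-rate pulse train (here 900 pulses per second).
pulse_rate = 900
pulse_train = np.zeros_like(signal)
pulse_train[np.arange(len(signal)) % round(fs / pulse_rate) == 0] = 1.0

# 3. Amplitude-modulate the pulse train by the envelope.
stimulation = envelope * pulse_train
```

In an actual multichannel strategy, the input would first be split into frequency bands and each band's envelope would modulate the pulse train delivered to a different electrode; the sketch collapses this to a single channel to keep the core idea visible.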
