Abstract

A new corpus of emotional speech was created to conduct theory- and evidence-based comparisons with the published literature on the effects of a speaker’s intended emotion on the acoustics of their speech. Fifty-five adults were recorded speaking scripts with happy, angry, sad, and non-emotional prosodies. Variations in acoustic parameters such as pitch, timing, and formant deviations were investigated. Based on Scherer’s (1986) theoretical predictions about differences between discrete emotions, and on Juslin and Laukka’s (2003) empirically derived meta-analytic conclusions, we measured the degree to which the emotional speech data reflected the predicted differences in the acoustic parameters of these prosodies. First, relative to non-emotional prosody, angry and happy prosody were each 75% consistent with theory, whereas sad prosody was not (25%). Second, relative to non-emotional prosody, angry, happy, and sad prosodies were consistent with the empirical evidence base at 70%, 90%, and 50%, respectively. A subjective study was also conducted in which 30 adults rated the speech samples; overall, listeners discriminated the speaker’s intended emotion with 92% accuracy. Multiple regression analyses indicated that fewer than 25% of the significant acoustic patterns for each prosody accounted for variance in perceived emotional intensity. [Work supported by an award to Pamela Cole (NIMH 104547).]
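As a rough illustration of the kind of analysis described above (not the authors’ actual pipeline), the sketch below extracts a few acoustic parameters of the sort mentioned (mean pitch, pitch range, and a timing proxy) with librosa, then regresses listener intensity ratings on them. The file names and the ratings array are hypothetical placeholders, and the feature set is deliberately minimal.

```python
# Minimal sketch, assuming per-utterance WAV files and mean listener
# intensity ratings. Paths, ratings, and feature choices are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def acoustic_features(path):
    y, sr = librosa.load(path, sr=None)
    # Fundamental frequency (pitch) contour via probabilistic YIN;
    # unvoiced frames come back as NaN and are dropped.
    f0, _, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    f0 = f0[~np.isnan(f0)]
    duration = len(y) / sr  # crude timing measure
    return [f0.mean(), f0.max() - f0.min(), duration]

# Hypothetical inputs: one recording per utterance, one mean rating each.
paths = ["utt01.wav", "utt02.wav", "utt03.wav"]
ratings = np.array([4.2, 2.8, 3.5])

X = np.array([acoustic_features(p) for p in paths])
model = LinearRegression().fit(X, ratings)
print("R^2 (variance in perceived intensity explained):",
      model.score(X, ratings))
```

In a real study the regression would be fit over the full corpus with many more utterances than predictors; the model’s R-squared corresponds to the “variance in perceived emotional intensity” accounted for by the acoustic patterns.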
