Abstract

Studies on emotion recognition from prosody have largely focused on the role and effectiveness of isolated acoustic parameters, and less is known about how listeners perceive and combine information from these cues to infer emotional meaning. To better understand how acoustic cues influence the recognition of discrete emotions from the voice, this study investigated how listeners perceptually combine information from two critical acoustic cues, pitch and speech rate, to identify emotions. For each utterance, pitch and speech rate were independently manipulated over the whole utterance by factors of 1.25 (+25%) and 0.75 (−25%). To examine the influence of each cue relative to the other, the three pitch manipulations (+25%, 0%, and −25%) were crossed with the three speech rate manipulations (+25%, 0%, and −25%). Pseudo-utterances spoken in five emotional tones (happy, sad, angry, fear, and disgust) and a neutral tone, after undergoing these acoustic cue manipulations, were presented to 15 male and 15 female participants for an emotion identification task. Results indicated that both pitch and speech rate are important acoustic parameters for identifying emotions and, more critically, that the relative weighting of the two cues appears to contribute significantly to categorizing happy, sad, fear, and neutral expressions.
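
To make the crossed manipulation design concrete, the sketch below generates the nine stimulus versions (3 pitch levels × 3 speech rate levels) from a single recording. It is illustrative only: the abstract does not name the resynthesis tool the authors used, so librosa's pitch-shift and time-stretch routines stand in here, and the input file name is hypothetical. The multiplicative pitch factors from the abstract are converted to semitones, since that is the unit librosa's pitch shifter expects.

```python
# A minimal sketch of the 3 x 3 cue-manipulation design described in the
# abstract, assuming librosa for resynthesis (the study's actual tool is
# not specified). Factor values (+/-25%) follow the abstract.
import math
import itertools
import librosa
import soundfile as sf

FACTORS = {"-25%": 0.75, "0%": 1.0, "+25%": 1.25}

def manipulate(y, sr, pitch_factor, rate_factor):
    """Independently scale F0 and speech rate of a whole utterance."""
    if pitch_factor != 1.0:
        # Convert the multiplicative pitch factor to semitones:
        # e.g., 1.25 -> 12 * log2(1.25) ~= +3.86 semitones.
        n_steps = 12.0 * math.log2(pitch_factor)
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    if rate_factor != 1.0:
        # rate > 1 shortens the utterance, i.e., increases speech rate.
        y = librosa.effects.time_stretch(y, rate=rate_factor)
    return y

# "pseudoutterance.wav" is a hypothetical input recording.
y, sr = librosa.load("pseudoutterance.wav", sr=None)
for (p_lab, p), (r_lab, r) in itertools.product(FACTORS.items(), FACTORS.items()):
    out = manipulate(y, sr, p, r)
    sf.write(f"stim_pitch{p_lab}_rate{r_lab}.wav", out, sr)
```

Crossing the two cues this way yields nine versions per utterance, including the unmanipulated original (0%, 0%), which is what allows the contribution of each cue to be assessed with the other held constant.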
