Abstract

Identification of the acoustic cues used to perceive emotions in speech is important for a number of applications, including rehabilitation, natural speech modeling, and speech synthesis. In a recent experiment, Patel, Shrivastav, Harnsberger, and Shrivastav (2007) found that a four‐dimensional solution accounted for 90% of the variance in similarity judgments for 19 emotional categories in nonsense speech. That solution was based on judgments averaged across 12 listeners. The present study investigated individual differences in the perception of emotions in speech devoid of semantic information but rich in suprasegmental cues. Six male and six female listeners participated in a same‐different discrimination test of a set of nonsense sentences produced in 19 emotional contexts by two actors; nonsense sentences were used to avoid biases introduced by semantic content. The perceptual distance between each stimulus pair was computed as a d' value for each listener. These distances were submitted to a multidimensional scaling analysis using the INDSCAL algorithm, which reports the best-fitting solution for the listeners as a group, along with the weights each individual listener assigns to each dimension. The results of this analysis will be presented.
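
As a rough illustration of the pipeline the abstract describes, the sketch below computes per-pair d' values from same-different response rates and submits the resulting distance matrix to multidimensional scaling. All response rates here are synthetic, the d' = z(H) - z(F) formula is the common yes/no textbook approximation rather than a same-different-specific detection model, and scikit-learn's MDS stands in for INDSCAL, which additionally fits per-listener dimension weights and has no implementation in scikit-learn.

```python
import numpy as np
from scipy.stats import norm
from sklearn.manifold import MDS

def dprime(hit_rate, fa_rate, eps=1e-4):
    """Sensitivity index d' = z(H) - z(F), a simple approximation."""
    # Clip rates away from 0 and 1 so the inverse normal CDF stays finite.
    h = np.clip(hit_rate, eps, 1 - eps)
    f = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

n_emotions = 19
rng = np.random.default_rng(0)

# Hypothetical per-pair response rates for one listener:
# hits  = "different" responses to physically different pairs,
# fas   = "different" responses to physically identical pairs.
dist = np.zeros((n_emotions, n_emotions))
for i in range(n_emotions):
    for j in range(i + 1, n_emotions):
        hits = rng.uniform(0.6, 0.95)
        fas = rng.uniform(0.05, 0.4)
        dist[i, j] = dist[j, i] = dprime(hits, fas)

# Ordinary MDS on one listener's precomputed distance matrix;
# INDSCAL would instead fit one group space plus per-listener weights.
mds = MDS(n_components=4, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
print(coords.shape)  # (19, 4): one 4-D point per emotional category
```

In this toy setup, each listener would contribute their own 19 x 19 matrix, and an individual-differences scaling step would then recover both the shared dimensions and each listener's dimension weights.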
