Abstract
In two studies investigating the recognition of emotion from vocal cues, each of four emotions (joy, sadness, anger, and fear) was posed by an actress speaking the same semantically neutral sentence. Judgments of the emotion expressed in these segments were compared with similar judgments of voice-synthesized (Moog synthesizer) samples (Study 1) or of three different alterations of the full-speech samples (Study 2). Correct identification of the posed emotion was high for the full-speech samples. The voice-synthesized samples appeared to capture some cues promoting emotion recognition, but correct identification did not approach that of the other segments. Recognition of emotion decreased, though not as dramatically as expected, in each of the three alterations of the original samples.