Abstract

This paper investigates the detection of speech emotion using different sets of voice quality, prosodic, and hybrid features. A total of five feature sets are evaluated: two sets of voice quality features, one set of prosodic features, and two hybrid sets. The experimental data are drawn from the Berlin Emotional Database. Emotion classification is performed with a Multi-Layer Perceptron neural network. The results show that the hybrid features give better overall recognition rates than the voice quality or prosodic features alone: the best overall recognition rate for the hybrid features is 75.51%, compared with 64.67% for the prosodic features and 59.63% for the voice quality features. Nevertheless, recognition performance varies across emotions; with the hybrid features, the highest recognition rate is 88% for anger, while the lowest is only 52% for disgust.
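
As a rough illustration of the pipeline the abstract describes (prosodic feature extraction followed by MLP classification), a minimal sketch is given below. It assumes librosa for pitch and energy extraction and scikit-learn's MLPClassifier; the specific feature statistics, network topology, and function names here are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: prosodic features + MLP emotion classification.
# The exact feature sets and MLP settings in the paper are not reproduced
# here; all parameters below are illustrative placeholders.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def prosodic_features(wav_path):
    """Extract simple pitch- and energy-based statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]      # frame-level energy
    return np.array([f0.mean(), f0.std(), f0.max() - f0.min(),
                     rms.mean(), rms.std()])

def train_mlp(X, y):
    """Train an MLP on utterance-level feature vectors X with emotion labels y
    (e.g. the seven EMO-DB emotions) and return the held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000,
                        random_state=0)
    clf.fit(scaler.transform(X_tr), y_tr)
    return clf, clf.score(scaler.transform(X_te), y_te)
```

In practice, the feature vectors would be computed once per utterance of the corpus and the overall and per-emotion recognition rates read off from a confusion matrix rather than a single accuracy score.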
