Abstract

A speech emotion recognition (SER) system is an important building block of modern human-computer interaction. In this work, emotional speech samples are taken from two databases: the Berlin emotional speech database (Emo-DB) and the Surrey Audio-Visual Expressed Emotion (SAVEE) database. Three cepstral feature sets, namely mel-frequency cepstral coefficients (MFCC), human factor cepstral coefficients (HFCC), and gammatone frequency cepstral coefficients (GFCC), are extracted from the emotional speech samples. These features represent the emotional content of the speech signal and are used for training, validating, and testing the classifiers. Two classifiers, a feedforward backpropagation artificial neural network (FF-BP-ANN) and a support vector machine (SVM), are used to develop the SER systems. Each classifier is trained to assign an input speech signal to one of eight emotion classes: anger, boredom, disgust, fear, happiness, neutral, sadness, and surprise. Recognition results for the three cepstral feature sets on the utterances of both databases are presented. Finally, the performance of the SER systems is compared across features and classifiers, and against results reported in the existing literature.
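To make the described pipeline concrete, the sketch below shows one way to extract MFCC features from an utterance and feed fixed-length feature vectors to an SVM classifier. It is a minimal illustration, assuming Python with librosa and scikit-learn; the library choices, parameter values, and the time-averaging pooling step are assumptions for exposition, not the authors' implementation. HFCC/GFCC extraction and the FF-BP-ANN classifier would slot in at the same two points of the pipeline.

# Minimal sketch of an MFCC + SVM emotion-classification pipeline.
# Library choices (librosa, scikit-learn) and all parameters are
# illustrative assumptions; the paper does not specify an implementation.
import numpy as np
import librosa
from sklearn.svm import SVC

EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def mfcc_features(y, sr, n_mfcc=13):
    """Reduce an utterance's MFCC matrix to one fixed-length vector by
    averaging over time (a simple pooling choice, assumed here)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Training on the corpora would look like the following, where `files`
# and `labels` are placeholders for Emo-DB/SAVEE utterances and their
# emotion annotations:
#   X = np.stack([mfcc_features(*librosa.load(f, sr=None)) for f in files])
#   y = np.array([EMOTIONS.index(lbl) for lbl in labels])
#   clf = SVC(kernel="rbf").fit(X, y)

if __name__ == "__main__":
    # Smoke test on a synthetic one-second tone (no corpus audio needed).
    sr = 16000
    tone = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)
    vec = mfcc_features(tone, sr)
    print(vec.shape)  # -> (13,): one fixed-length feature vector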
