Abstract

Feature extraction is among the most important steps in pattern recognition systems, and researchers have studied it extensively. This work aims to design and implement a novel feature extraction method that can extract features for recognizing different emotions. A unimodal, real-time, gender- and speaker-independent speech emotion recognition (SER) framework has been designed and implemented using the newly proposed statistical features. The work's contribution to feature extraction is the use of many multiples of the standard deviation (SD) on either side of the mean, rather than the conventional 2 SDs on either side of the mean used in prior work. The SD multiples used to study the variance of the feature distribution around the mean are 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.5 and 4. The datasets used were the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) with eight emotions, the Berlin emotional speech database (Emo-DB) with seven emotions and the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset with seven emotions. Compared to state-of-the-art unimodal SER approaches, this work achieved classification accuracies of 86.1%, 96.3% and 91.7% on RAVDESS, Emo-DB and SAVEE, respectively.
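The abstract does not define the exact statistic computed at each SD multiple, so the following Python sketch shows one plausible reading: for each multiple k, measure the fraction of a feature track's samples that fall within mean ± k·SD. The names sd_band_features and SD_DEGREES, and the within-band-fraction statistic itself, are illustrative assumptions, not the paper's exact method.

import numpy as np

# SD multiples listed in the abstract.
SD_DEGREES = [0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2,
              2.25, 2.5, 2.75, 3, 3.5, 4]

def sd_band_features(x: np.ndarray) -> np.ndarray:
    """One hypothetical realization of the multi-SD statistical feature:
    for each multiple k, return the fraction of samples lying within
    mean - k*SD and mean + k*SD of the input feature track."""
    mu, sd = x.mean(), x.std()
    return np.array([np.mean(np.abs(x - mu) <= k * sd) for k in SD_DEGREES])

# Usage: x could be a frame-level acoustic feature (e.g., pitch or energy)
# extracted from an utterance; the output is a 14-dimensional vector.
rng = np.random.default_rng(0)
pitch_track = rng.normal(loc=180.0, scale=25.0, size=500)  # synthetic data
print(sd_band_features(pitch_track))

Under this reading, each utterance-level feature contributes a 14-dimensional descriptor of how its distribution mass concentrates around the mean, which a standard classifier could then consume.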
