Speech emotion recognition (SER) is an emerging technology that identifies a speaker's emotional state from speech signals. This area of computational intelligence is receiving increasing attention in academia, healthcare, and social media applications. This research was conducted to identify emotional states in verbal communication using the publicly available Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Data augmentation involved adding noise and applying time stretching, time shifting, and pitch shifting; we then extracted zero-crossing rate (ZCR), chroma, Mel-frequency cepstral coefficient (MFCC), and spectrogram features. We evaluated several pretrained deep learning models, including VGG16, ResNet50, Xception, InceptionV3, and DenseNet121; of these, Xception yielded the best results. We further improved performance by modifying the Xception model with additional layers and tuned hyperparameters. The proposed model was assessed with a range of evaluation metrics: accuracy, misclassification rate (MCR), precision, sensitivity, specificity, F1-score, negative predictive value, false negative rate, false positive rate, false discovery rate, and false omission rate. The proposed model achieved an overall accuracy of 98% with an MCR of 2%, and attained precision, sensitivity, and specificity of 91.99%, 91.78%, and 98.68%, respectively, with an F1-score of 91.83%. It demonstrated superiority over other state-of-the-art techniques.
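The augmentation and feature-extraction pipeline described above can be sketched with librosa. This is a minimal illustration, not the authors' exact code: the parameter values (noise level, stretch rate, shift amount, pitch steps, `n_mfcc`) are assumptions, since the abstract does not specify them.

```python
# Sketch of the augmentation and feature-extraction pipeline using librosa.
# All parameter values below are illustrative assumptions, not values from
# the paper.
import numpy as np
import librosa

def augment(y, sr):
    """Return augmented variants of waveform y: noise, stretch, shift, pitch."""
    noisy = y + 0.005 * np.random.randn(len(y))                  # additive noise
    stretched = librosa.effects.time_stretch(y, rate=0.9)        # time stretching
    shifted = np.roll(y, int(0.1 * sr))                          # time shifting
    pitched = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # pitch shifting
    return [noisy, stretched, shifted, pitched]

def extract_features(y, sr, n_mfcc=13):
    """Frame-level features averaged over time: ZCR, chroma, MFCC, Mel spectrogram."""
    zcr = librosa.feature.zero_crossing_rate(y)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    return np.concatenate([f.mean(axis=1) for f in (zcr, chroma, mfcc, mel)])

# Usage: y, sr = librosa.load("ravdess_clip.wav", sr=None)
# feats = [extract_features(v, sr) for v in [y] + augment(y, sr)]
```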
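The modified Xception classifier could take a form like the following Keras sketch, assuming spectrogram-style inputs resized to 224x224x3 and the eight RAVDESS emotion classes. The head width, dropout rate, and learning rate are hypothetical; the abstract only states that extra layers and tuned hyperparameters were added.

```python
# Sketch of a modified Xception classifier in Keras. Input size, head width,
# dropout rate, and learning rate are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained weights; optionally fine-tune later

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)          # additional dense layer
x = layers.Dropout(0.3)(x)                           # regularization
outputs = layers.Dense(8, activation="softmax")(x)   # 8 RAVDESS emotion classes

model = models.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```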