Abstract

Recent research on speech emotion recognition (SER) has made considerable advances through the use of Mel-frequency cepstral coefficient (MFCC) spectrogram features and neural network approaches such as convolutional neural networks (CNNs). A fundamental limitation of CNNs is that they do not capture the spatial information contained in spectrograms. Capsule networks (CapsNet) have gained recognition as alternatives to CNNs owing to their greater capacity for hierarchical representation. However, a hidden drawback of CapsNet is that the compression methods employed in CNNs cannot be applied to it directly. To address these issues, this research introduces a novel text-independent and speaker-independent SER architecture, in which a dual-channel long short-term memory compressed-CapsNet (DC-LSTM COMP-CapsNet) algorithm is proposed based on the structural features of CapsNet. The proposed classifier ensures model energy efficiency and provides an adequate compression method for speech emotion recognition, neither of which is delivered by the original CapsNet structure. Moreover, the grid search (GS) approach is used to obtain optimal solutions. The results show improved performance and a reduction in training and testing run time. The speech datasets used to evaluate our algorithm are the Arabic Emirati-accented corpus, the English Speech Under Simulated and Actual Stress (SUSAS) corpus, the English Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D). This work reveals that MFCC delta-delta is the optimal feature extraction method among the known methods compared. Using MFCC delta-delta features on the four datasets, DC-LSTM COMP-CapsNet surpasses all the state-of-the-art systems, classical classifiers, CNN, and the original CapsNet. On the Arabic Emirati-accented corpus, the proposed model yields an average emotion recognition accuracy of 89.3%, compared with 84.7%, 82.2%, 69.8%, 69.2%, 53.8%, 42.6%, and 31.9% for CapsNet, CNN, support vector machine (SVM), multi-layer perceptron (MLP), k-nearest neighbor (KNN), radial basis function (RBF), and naïve Bayes (NB), respectively.
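
To illustrate the feature extraction step identified as optimal above, the sketch below computes MFCC delta-delta (second-order derivative) features with the librosa library. This is a minimal, hedged example: the file path, sampling rate, and number of coefficients are illustrative assumptions and are not parameters reported for the proposed system.

```python
# Minimal sketch of MFCC delta-delta extraction using librosa.
# The file path, sampling rate, and number of coefficients are
# illustrative assumptions, not values taken from the paper.
import librosa

def mfcc_delta_delta(path, sr=16000, n_mfcc=13):
    # Load the speech signal at the assumed sampling rate.
    y, sr = librosa.load(path, sr=sr)
    # Static MFCCs: shape (n_mfcc, n_frames).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Second-order derivative (delta-delta) of the MFCCs, same shape.
    return librosa.feature.delta(mfcc, order=2)

if __name__ == "__main__":
    features = mfcc_delta_delta("speech_sample.wav")  # hypothetical file
    print(features.shape)  # (13, number_of_frames)
```

The resulting delta-delta matrix (coefficients by frames) is the kind of feature representation that would be fed to a classifier such as the proposed DC-LSTM COMP-CapsNet or the baseline models listed above.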
