Abstract

This research article proposes two deep ensemble models, namely the Deep Cascaded Ensemble (DCE) and the Deep Parallel Ensemble (DPE), for automatic speech emotion classification. Classification of emotions into their respective classes has long relied on machine learning and deep learning networks. The proposed models blend different machine learning and deep learning models in a cascaded or parallel architecture. They considerably reduce memory consumption in a Google Colab environment. Furthermore, the proposed deep ensemble models overcome the need to tune numerous hyper-parameters and the large data requirements of deep learning algorithms. The proposed DCE and DPE yield optimal classification accuracy with reduced memory consumption on smaller datasets. Experimentally, the proposed deep cascaded ensemble also outperforms the deep parallel ensemble built from the same combination of deep learning and machine learning networks. The proposed models and the baseline models were evaluated using performance metrics including Cohen's kappa coefficient, classification accuracy, and space and time complexity.
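The abstract does not specify the exact base learners or combination rules, so the following is only a minimal sketch of the two ensemble styles it describes: a cascaded ensemble, where base-model outputs feed a meta-classifier, and a parallel ensemble, where base-model predictions are combined directly. An MLP stands in for the deep component and an SVM and random forest for the classical machine learning components; the synthetic features are a hypothetical placeholder for extracted speech-emotion features.

```python
# Sketch only: stand-in models and synthetic data, not the paper's actual DCE/DPE.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy stand-in for speech-emotion feature vectors (e.g. MFCC statistics) with 4 emotion classes.
X, y = make_classification(n_samples=500, n_features=40, n_classes=4,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
]

# Cascaded-style ensemble: base predictions are passed on to a meta-classifier.
cascaded = StackingClassifier(estimators=base_learners,
                              final_estimator=RandomForestClassifier(random_state=0))

# Parallel-style ensemble: base predictions are combined by soft voting.
parallel = VotingClassifier(estimators=base_learners, voting="soft")

for name, model in [("cascaded (stacking)", cascaded), ("parallel (voting)", parallel)]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```

The key structural difference illustrated here is that the cascaded variant learns how to weight its base models through a trained meta-classifier, whereas the parallel variant fixes the combination rule (voting) in advance.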
