Abstract

Automatic speech emotion recognition (ASER) from raw speech signals is a challenging task, since recognition accuracy depends heavily on the speech features extracted for emotion classification. The pre-processing and classification phases also play a key role in improving the accuracy of an ASER system. This paper therefore proposes a deep learning convolutional neural network (DLCNN)-based ASER model, hereafter denoted ASERNet. Speech denoising is performed with spectral subtraction (SS), and deep features are extracted by combining linear predictive coding (LPC) with Mel-frequency cepstral coefficients (MFCCs). Finally, the DLCNN classifies the speech emotion from the extracted LPC-MFCC features. Simulation results demonstrate the superior performance of the proposed ASERNet model, in terms of quality metrics such as accuracy, precision, recall, and F1-score, compared with state-of-the-art ASER approaches.
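The two signal-processing stages named above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the frame length, noise-estimation window, and LPC order below are illustrative assumptions, the MFCC stage (typically computed with a library such as librosa) is omitted, and a pure tone stands in for real speech.

```python
import numpy as np

def spectral_subtraction(signal, frame_len=256, noise_frames=5):
    """Denoise by subtracting an average noise magnitude spectrum,
    estimated from the first few frames, from every frame's spectrum."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    spectra = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)
    # Subtract the noise estimate and floor negative magnitudes at zero,
    # then resynthesise with the noisy phase.
    mag = np.maximum(np.abs(spectra) - noise_mag, 0.0)
    clean = np.fft.irfft(mag * np.exp(1j * np.angle(spectra)),
                         n=frame_len, axis=1)
    return clean.ravel()

def lpc_coefficients(signal, order=8):
    """LPC via the autocorrelation (Toeplitz) method: solve R a = r."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Toy example: a 440 Hz tone in additive white noise (stand-in for speech).
rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(t.size)

denoised = spectral_subtraction(noisy)
lpc = lpc_coefficients(denoised, order=8)
print(denoised.shape, lpc.shape)
```

In a full pipeline, the LPC vector would be concatenated with the MFCCs of each denoised frame to form the feature input to the CNN classifier.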
