Abstract

Emotion detection from speech signals has been a research topic in human-machine interface applications for several years, and a variety of techniques have been developed to discern emotions from speech. Theoretical definitions, categorizations, and modalities of emotion expression are discussed. For this study, a speech emotion recognition (SER) framework based on several classifiers and feature extraction methods was developed. Mel-frequency cepstrum coefficients (MFCC) and modulation spectral (MS) features of the speech signals are analysed and fed into the classifiers for training, and feature selection is applied to identify the most informative feature subset (FS). The features extracted from the emotional speech samples that make up the database for the speech emotion recognition system include power, pitch, linear prediction cepstrum coefficients (LPCC), and MFCC; classification performance depends on these extracted features. Seven emotions are classified using a recurrent neural network (RNN) classifier, and the results are compared with techniques used in spoken-audio emotion detection such as multivariate linear regression (MLR) and support vector machines (SVM).
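As a rough illustration of the pipeline the abstract describes (per-utterance MFCC features fed to a classifier), the sketch below uses librosa for feature extraction and an SVM baseline from scikit-learn. The file names, labels, and parameter values are hypothetical placeholders, and the paper's RNN and MLR classifiers are not reproduced here.

```python
# Minimal sketch of an MFCC-based speech emotion classifier, assuming librosa
# and scikit-learn are installed; paths, labels, and parameters are placeholders.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13, sr=16000):
    """Load a speech sample and return its time-averaged MFCC vector."""
    signal, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per utterance

# Hypothetical labelled utterances standing in for a real emotion corpus
samples = [("happy_01.wav", "happy"), ("angry_01.wav", "angry"),
           ("sad_01.wav", "sad"), ("neutral_01.wav", "neutral")]

X = np.array([mfcc_features(path) for path, _ in samples])
y = np.array([label for _, label in samples])

# SVM baseline on the MFCC features; the paper additionally evaluates an RNN
# classifier and multivariate linear regression on comparable features.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))
```

In the study itself, the time-averaged MFCC vector would be replaced or augmented by the other features named in the abstract (power, pitch, LPCC, modulation spectral features) before feature selection and classification.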
