Abstract

Emotion detection from speech signals has been an active research topic in human-machine interface applications for several years, and a variety of systems have been developed to discern emotions from speech. Theoretical definitions, categorizations, and modalities of emotion expression are discussed. For this study, a speech emotion recognition (SER) framework based on several classifiers and feature extraction methods was developed. The mel-frequency cepstral coefficients (MFCC) and modulation spectral (MS) characteristics of the speech signals are analysed and fed to the classifiers for training, and feature selection is applied to identify the most relevant feature subset (FS). The features extracted from the emotional speech samples that make up the database for the speech emotion recognition system include power, pitch, linear prediction cepstral coefficients (LPCC), and mel-frequency cepstral coefficients (MFCC); the effectiveness of classification depends on these extracted features. Seven emotions are classified using a recurrent neural network (RNN) classifier, and the results are compared with multivariate linear regression (MLR) and support vector machine (SVM) techniques commonly used for emotion detection in spoken audio signals.
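
To make the described pipeline concrete, the following is a minimal sketch of MFCC-based feature extraction followed by an RNN classifier over seven emotion classes. The paper does not publish code, so the library choices (librosa for MFCC extraction, PyTorch for the recurrent model), the emotion label set, and all hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: library choices, labels, and hyperparameters are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn

# Assumed set of seven emotion labels for demonstration purposes.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def extract_mfcc(path, n_mfcc=13, sr=16000):
    """Load a speech file and return its MFCC sequence (frames x coefficients)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T.astype(np.float32)  # shape: (num_frames, n_mfcc)

class EmotionRNN(nn.Module):
    """Simple GRU-based classifier over MFCC frame sequences."""
    def __init__(self, n_features=13, hidden=64, n_classes=len(EMOTIONS)):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, frames, n_features)
        _, h = self.rnn(x)            # h: (1, batch, hidden) final hidden state
        return self.fc(h.squeeze(0))  # logits: (batch, n_classes)

# Minimal usage with a random stand-in for one utterance's MFCC sequence:
model = EmotionRNN()
dummy = torch.randn(1, 200, 13)       # one utterance, 200 frames, 13 MFCCs
pred = model(dummy).argmax(dim=1)
print(EMOTIONS[pred.item()])
```

In a full system, the same MFCC (and MS) features would be fed to the baseline MLR and SVM classifiers for the comparison the abstract describes; those baselines are omitted here for brevity.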
