Abstract

Several emotion recognition frameworks have been developed by researchers for recognizing human emotions in spoken utterances. This paper reviews speech emotion recognition and music mood recognition in light of past research. In addition, different feature extraction techniques and different classifiers for emotion recognition are explored. The database for a speech emotion and music mood recognition framework consists of speech and music samples, and the features derived from these samples include linear prediction cepstral coefficients (LPCC), energy, pitch and Mel-frequency cepstral coefficients (MFCC). Various wavelet structures can be used to extract the feature vectors. Classifiers are then used to separate emotions such as anger, happiness, sadness, surprise, fear and the neutral state. The extracted features are one of the basic parameters for analysing a classifier's performance. Results and limitations of speech emotion and music mood recognition frameworks obtained with various techniques are discussed here. Preprocessing, feature extraction and recognition are the three basic steps in most speech recognition models. For speech emotion and music mood recognition systems, researchers use one of three distinct methodologies: the knowledge-based method, the acoustic-phonetic method and the pattern recognition method. Techniques such as principal component analysis (PCA) and MFCCs for the recognition of emotions in speech are also described. Parameters such as entropy, zero crossing rate, spectral centroid and spectral roll-off are also discussed for feature extraction and for the recognition of emotion and mood in speech and music, respectively.
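To make the feature set concrete, the minimal sketch below shows how the frame-level parameters named above (MFCC, energy, pitch, zero crossing rate, spectral centroid, spectral roll-off) might be extracted from one audio clip and pooled into a fixed-length vector. The librosa library and the file name "speech.wav" are illustrative assumptions; the surveyed works do not prescribe a specific toolkit.

```python
# Illustrative sketch (assumes the librosa toolkit; not from the surveyed papers).
# Extracts the features named in the abstract and pools them per clip.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=None)  # placeholder file; native sample rate

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # MFCCs
energy = librosa.feature.rms(y=y)                         # short-time energy
zcr = librosa.feature.zero_crossing_rate(y)               # zero crossing rate
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral centroid
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # spectral roll-off
pitch = librosa.yin(y, fmin=65, fmax=400, sr=sr)          # frame-level pitch (F0)

# Average the frame-level features into one fixed-length vector per clip,
# suitable as input to a classifier.
feature_vector = np.concatenate([
    mfcc.mean(axis=1), energy.mean(axis=1), zcr.mean(axis=1),
    centroid.mean(axis=1), rolloff.mean(axis=1), [np.nanmean(pitch)],
])
print(feature_vector.shape)  # (18,) with the settings above
```

In a pipeline like those surveyed, vectors of this kind would typically be reduced with PCA and then passed to a classifier that separates the emotion or mood classes.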
