Abstract

With the growing need for real-time speech emotion analysis and sentiment analysis systems in human-computer interaction, speech emotion recognition (SER) has become one of the most studied areas in the field. In this paper, we seek a better way to analyse emotion from speech signals by taking the speaker's gender into account, regardless of the content of the speech. The audio data used for training, testing, and classification combines several databases: the Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D); the Berlin Database of Emotional Speech (EMO-DB), a German-language corpus with utterances averaging about 3 seconds; the Surrey Audio-Visual Expressed Emotion database (SAVEE); the Toronto Emotional Speech Set (TESS); and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). We use a total of four models: two convolutional neural networks (CNNs) and two multilayer perceptrons (MLPs). MFCCs are computed from each utterance and passed first to a gender classifier and then to the corresponding gender-specific emotion classifier, for both the MLP and the CNN pipelines. We then report the key differences in accuracy obtained from the MLP and CNN classifiers for recognising speech emotion. Time-domain, frequency-domain, and spectral acoustic features are used. The trained model classifies the gender of the speaker together with one of the emotional states from the speech signal.
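As a concrete illustration of the pipeline summarised above, the following minimal sketch extracts MFCC features and routes them through a gender classifier followed by a gender-specific emotion classifier. It assumes librosa and scikit-learn are available; the gender_model and emotion_models names, and the MLPClassifier configuration, are hypothetical placeholders rather than the exact models used in the paper.

```python
# Sketch of the two-stage pipeline: MFCC extraction -> gender classifier
# -> gender-specific emotion classifier (MLP variant; the CNN variant
# follows the same routing). Models here are untrained placeholders and
# would need to be fitted on the combined datasets before use.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_mfcc(path, n_mfcc=40):
    """Load an audio file and return its mean MFCC vector as a feature."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1)  # average the coefficients over time frames

# Hypothetical models: one gender classifier, one emotion classifier per gender.
gender_model = MLPClassifier(hidden_layer_sizes=(256, 128))
emotion_models = {
    "male":   MLPClassifier(hidden_layer_sizes=(256, 128)),
    "female": MLPClassifier(hidden_layer_sizes=(256, 128)),
}

def predict_gender_and_emotion(path):
    """Classify gender first, then route to the matching emotion classifier."""
    features = extract_mfcc(path).reshape(1, -1)
    gender = gender_model.predict(features)[0]
    emotion = emotion_models[gender].predict(features)[0]
    return gender, emotion
```

In this arrangement the gender decision acts as a gate that selects which emotion classifier sees the MFCC features, which is one way to realise the gender-aware classification described in the abstract.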
