Abstract
A Human Emotion Recognition (HER) system automatically recognizes human emotional states so that appropriate action can be taken when negative emotions are detected. Such systems find application in driver-assistance systems, aircraft cockpit systems, call centers, etc. The HER system proposed in this paper recognizes, classifies, and validates different human emotions from the acoustic signal using several classifiers over standard speech corpora. Although many acoustic features carry emotional information, training the machine with all of them may overfit the model or reduce accuracy. To overcome this problem, an optimized selection of fused features must be performed. This paper discusses optimized methods for feature fusion and feature selection to identify human emotional states from speech signals. Feature fusion increases the computational complexity of the HER system because of the high-dimensional, correlated speech feature set; to overcome this and enhance the recognition rate, a feature-selection technique is used. To predict emotional categories, different classifiers are considered, viz. Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and K-Nearest Neighbors (KNN), and the results are validated on standard corpora, viz. Emo-DB, SES, IEMOCAP, IITKGP-SESC, and IITKGP-SEHSC. SVM outperformed the other classifiers and is best trained on the IITKGP-SEHSC corpus.
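The pipeline the abstract describes (a fused high-dimensional feature set, a feature-selection step to curb overfitting, then a classifier such as SVM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random matrix stands in for fused acoustic features, the four classes stand in for emotion categories, and the choice of `SelectKBest` with an ANOVA F-score is an assumption, since the abstract does not name the selection criterion.

```python
# Hypothetical sketch of a feature-selection + SVM emotion classifier.
# Synthetic data stands in for fused acoustic features from a speech corpus.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 300, 60          # high-dimensional fused feature set
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 4, size=n_samples)   # four emotion categories (assumed)
X[:, :5] += y[:, None]                   # make a few features class-informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = Pipeline([
    # keep only the 10 most discriminative features before classification
    ("select", SelectKBest(f_classif, k=10)),
    ("svm", SVC(kernel="rbf")),
])
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Fitting selection and classification in one `Pipeline` ensures the feature scores are computed on the training split only, which mirrors the abstract's point that selection, not the full correlated feature set, should drive the classifier.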