Abstract
Estimating human emotions with a computer is difficult when the person is engaged in a conversational session. In this work, a hybrid system combining facial expressions and speech is proposed to estimate six basic emotions (anger, sadness, happiness, boredom, disgust and surprise) of a person engaged in a conversational session. Relative Bin Frequency Coefficients and Relative Sub-Image Based features are used for the acoustic and visual data, respectively, and a Support Vector Machine with a radial basis kernel is used for classification. The results reveal that the proposed feature extraction from speech and facial expressions, together with the proposed fusion technique, is the most prominent factor affecting the accuracy of the emotion detection system; the other factors considered have a relatively minor effect. It was observed that the bimodal emotion detection system performed worse than the unimodal system based on deliberate facial expressions; to address this issue, a suitable database was used. The results indicate that the proposed emotion detection system outperforms the alternatives with respect to the basic emotion classes.
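The abstract's classification stage (a Support Vector Machine with a radial basis kernel operating on fused acoustic and visual features) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values are hypothetical, concatenation is assumed as the fusion scheme, and `gamma` is an arbitrary kernel parameter.

```python
import math

def rbf_kernel(x, y, gamma=0.1):
    # Radial basis (RBF) kernel: K(x, y) = exp(-gamma * ||x - y||^2).
    # This is the kernel the SVM uses to compare two feature vectors.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def fuse_features(acoustic, visual):
    # Feature-level fusion by concatenation (one common scheme;
    # the paper's actual fusion technique may differ).
    return list(acoustic) + list(visual)

# Hypothetical acoustic (e.g. Relative Bin Frequency Coefficient) and
# visual (e.g. Relative Sub-Image Based) feature vectors for two samples.
sample_a = fuse_features([0.2, 0.5], [0.1, 0.9, 0.3])
sample_b = fuse_features([0.3, 0.4], [0.2, 0.8, 0.5])

# The kernel value feeds into the SVM decision function; values near 1
# indicate the two fused feature vectors are similar.
similarity = rbf_kernel(sample_a, sample_b)
```

In a full system, a library such as scikit-learn (`SVC(kernel="rbf")`) would train the classifier on many such fused vectors, one per labelled emotion sample.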