Abstract
To improve on the single-mode emotion recognition rate, a bimodal fusion method based on speech and facial expression is proposed. Here, the emotion recognition rate is defined as the ratio of the number of images correctly recognized to the number of input images, and single-mode emotion recognition refers to recognizing emotion from either speech or facial expression alone. To increase this rate, the two modalities are combined through bimodal fusion. Emotion detection from facial expression uses an adaptive sub-layer compensation (ASLC) based facial edge detection method, while emotion detection from speech uses the well-known support vector machine (SVM). The bimodal emotion decision is then obtained through probability analysis.
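As a rough illustration of the decision-level fusion step, the sketch below combines the per-class probabilities produced by the two single-mode recognizers with a weighted sum and selects the most likely emotion. The label set, weights, and sum rule are assumptions for illustration only, since the abstract does not specify the exact form of the probability analysis.

```python
# Minimal sketch of decision-level fusion by probability analysis.
# The emotion label set and the weighted-sum rule are assumptions;
# the paper's actual probability analysis may differ.
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed label set

def fuse_probabilities(p_face, p_speech, w_face=0.5, w_speech=0.5):
    """Combine per-class probabilities from the facial-expression
    classifier and the speech (SVM) classifier with a weighted sum,
    then return the most probable emotion and the fused distribution."""
    p_face = np.asarray(p_face, dtype=float)
    p_speech = np.asarray(p_speech, dtype=float)
    fused = w_face * p_face + w_speech * p_speech
    fused /= fused.sum()  # renormalize to a probability distribution
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: the face model leans toward "happy" and the speech model weakly agrees.
label, fused = fuse_probabilities([0.1, 0.6, 0.2, 0.1],
                                  [0.2, 0.4, 0.3, 0.1])
print(label, fused)
```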