Abstract

Machine learning approaches for human emotion recognition have recently demonstrated high performance, but mostly for subject-dependent approaches, in a variety of applications such as advanced driver assistance systems, smart homes and medical environments. The focus has therefore shifted towards subject-independent approaches, which are more universal: the emotion recognition system is trained on a specific group of subjects and then tested on entirely new persons, possibly using other sensors of the same physiological signals, in order to recognize their emotions. In this paper, we explore a novel robust subject-independent human emotion recognition system, which consists of two major models: the first is an automatic feature calibration model and the second is a classification model based on Cellular Neural Networks (CNN). The proposed system produces state-of-the-art results, with an accuracy rate between and when using the same elicitation materials and physiological sensor brands for both training and testing, and an accuracy rate of when the elicitation materials and physiological sensor brands used in training differ from those used in testing. The following physiological signals are involved: ECG (electrocardiogram), EDA (electrodermal activity) and ST (skin temperature).

Highlights

  • Emotion is a complex phenomenon which involves various physical structures

  • We provide a description of the overall research methodology, a comprehensive presentation of the physiological reference dataset MAHNOB used for training, and a presentation of our lab dataset used for final testing and validation of the proposed emotion recognition system developed in this study

  • The classifier is first applied to the MAHNOB dataset to show how the proposed classifier outperforms standard classifiers such as k-nearest neighbors (KNN), artificial neural networks (ANNs), the Naive Bayes classifier (NB) and SVM


Introduction

Emotion is a complex phenomenon which involves various physical structures. It plays an important role in decision-making, behavior and other social communication. Human emotions can be extracted from appropriately measured physiological sensor data. Most researchers in the field of emotion recognition have focused on the analysis of data originating from a single sensor, such as audio (speech) or video (facial expression) data [6,7]. The main motivation for fusing multiple sensors is that humans use a combination of different modalities in the body to express emotional states during human interaction [8]. Regarding subjective experience, several works have categorized emotions into different states that all humans, regardless of culture and race, can experience. Emotion expressions include human audiovisual activities such as gesture, posture, voice intonation, breathing noise, etc.
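To make the multi-sensor fusion idea concrete, the following is a minimal sketch of feature-level fusion across physiological channels. It is not the authors' actual pipeline: the per-channel statistics, window length, and random stand-in signals are all hypothetical, chosen only to show how features from ECG, EDA and ST could be concatenated into one vector before classification.

```python
import numpy as np

def extract_features(signal):
    # Illustrative per-channel statistics (placeholder for real
    # ECG/EDA/ST feature extraction such as heart-rate or SCR features)
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max()])

# Hypothetical fixed-length windows standing in for ECG, EDA and ST samples
rng = np.random.default_rng(0)
ecg = rng.normal(size=256)
eda = rng.normal(size=256)
st = rng.normal(size=256)

# Feature-level fusion: concatenate per-channel feature vectors into
# one combined vector, which a classifier would then consume
fused = np.concatenate([extract_features(s) for s in (ecg, eda, st)])
print(fused.shape)  # (12,)
```

Feature-level fusion like this is only one option; decision-level fusion, where each channel is classified separately and the predictions are combined, is the common alternative.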
