Abstract

Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, with facial expressions, voice, and posture, among others, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and we evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used for solving the emotion recognition problem with multimodal data, and we also provide directions for future research utilizing SNNs for affective computing. In addition to its good accuracy, the SNN recognition system can be incrementally trained on new data in an adaptive way. It requires only one-pass training, which makes it suitable for practical and on-line applications. These features are not manifested in other methods for this problem.
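To illustrate the evaluation protocol described above (feature-level fusion of the per-modality features followed by Leave-One-Subject-Out cross-validation), a minimal Python sketch is given below. The feature arrays, the subject grouping, and the SVM placeholder classifier are assumptions for illustration only; they do not reproduce the NeuCube eSNN pipeline used in this work.

    # Minimal sketch of feature-level fusion + LOSO cross-validation.
    # The per-modality feature arrays and the classifier are illustrative
    # placeholders, not the NeuCube eSNN pipeline used in the paper.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def loso_accuracy(modalities, y, subjects):
        """modalities: list of (n_trials, n_features_i) arrays, one per signal
        (e.g. ECG, skin temperature, skin conductance, respiration, pupil size).
        y: binary valence labels; subjects: subject id per trial."""
        X = np.hstack(modalities)                # feature-level fusion: concatenate features
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        accs = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
            clf.fit(X[train_idx], y[train_idx])   # train on all but one subject
            accs.append(clf.score(X[test_idx], y[test_idx]))
        return float(np.mean(accs))              # mean accuracy over held-out subjects

The reported 73.15% accuracy refers to the paper's SNN-based pipeline under this LOSO protocol, not to the placeholder classifier shown here.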

Highlights

  • The central aim of affective computing is to enable seamless communication between humans and computers by developing systems that can detect and respond to the various affective states of humans [1]

  • We developed an approach based on NeuCube [18], an evolving SNN (eSNN) framework, in order to classify emotional valence using a multimodal dataset that included video and physiological signals (a spike-encoding sketch is shown after this list)

  • In addition to the good classification accuracy, the spiking neural network (SNN) system can be incrementally trained on new data and new features in an adaptive way, allowing the system to be used in on-line applications [96]
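As referenced in the second highlight above, SNN frameworks such as NeuCube operate on spike trains rather than raw continuous signals, so each physiological channel must first be encoded into spikes. The sketch below shows a generic threshold-based (temporal-contrast) encoder; the example signal, sampling rate, and threshold rule are hypothetical and are not claimed to match the exact encoding used in this work.

    # Generic threshold-based (temporal-contrast) spike encoding of a 1-D signal.
    # Shown only as an illustration of how a continuous physiological channel
    # can be converted into spike trains before being fed to an SNN.
    import numpy as np

    def threshold_encode(signal, threshold=None):
        """Return (positive_spikes, negative_spikes) as boolean arrays:
        a spike is emitted whenever the sample-to-sample change exceeds
        the threshold (upward or downward)."""
        diff = np.diff(signal, prepend=signal[0])
        if threshold is None:
            # Data-driven threshold: mean + std of absolute signal change.
            threshold = np.mean(np.abs(diff)) + np.std(np.abs(diff))
        return diff > threshold, diff < -threshold

    # Example: encode a noisy respiration-like waveform sampled at 256 Hz.
    t = np.linspace(0, 10, 2560)
    resp = np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
    pos, neg = threshold_encode(resp)
    print(pos.sum(), neg.sum())   # number of up/down spikes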



Introduction

The central aim of affective computing is to enable seamless communication between humans and computers by developing systems that can detect and respond to the various affective states of humans [1]. Approaches to modelling affect can be classified into three categories: categorical, dimensional, and componential. Dimensional models represent emotion as a point in a multidimensional space whose dimensions include valence, activation, and control, allowing for the description of more complex and subtle emotions. Such a multidimensional space can pose a significant challenge to automatic emotion recognition systems, and researchers have therefore mostly used the simplified two-dimensional model of arousal and valence proposed in [4], where arousal captures the intensity of emotion, ranging from calm to excited, and valence ranges from unpleasant to pleasant [5]. The most popular componential model is the one proposed by Plutchik [6].
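As a concrete illustration of how the two-dimensional model is reduced to a classification target, binary valence labels can be derived by thresholding self-reported valence ratings at the midpoint of the rating scale. The 1-9 scale and the midpoint of 5 below are assumptions for illustration, not necessarily the exact labelling rule used in this work.

    # Illustrative mapping from an ordinal valence rating to a binary label,
    # as used for "binary valence" classification. The 1-9 rating scale and
    # the midpoint threshold of 5 are assumptions for illustration.
    def binarize_valence(rating, midpoint=5.0):
        """Return 1 for pleasant (high valence), 0 for unpleasant (low valence)."""
        return 1 if rating > midpoint else 0

    assert binarize_valence(7) == 1   # pleasant
    assert binarize_valence(3) == 0   # unpleasant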
