Abstract

Multimodal emotion recognition is an emerging field within affective computing that seeks to evaluate an emotional state by simultaneously using multiple physiological signals. Physiological signals such as the electroencephalogram (EEG), temperature, and the electrocardiogram (ECG), to name a few, have been used to assess emotions like happiness, sadness, or anger, or to assess levels of arousal or valence. Research efforts in this field have so far focused mainly on building pattern recognition systems, with an emphasis on feature extraction and classifier design. A different set of features is extracted from each type of physiological signal; these feature sets are then combined and used to feed a particular classifier. An important stage of a pattern recognition system that has received less attention in this literature is the feature selection stage. Feature selection is particularly useful for uncovering the discriminant abilities of particular physiological signals. The main objective of this paper is to study the discriminant power of different features associated with several physiological signals used for multimodal emotion recognition. To this end, we apply recursive feature elimination and margin-maximizing feature elimination over two well-known multimodal databases, namely DEAP and MAHNOB-HCI. Results show that EEG-related features have the highest discrimination ability. For the arousal index, the most discriminant features come from the EEG together with galvanic skin response features, whereas for the valence index, EEG features are complemented by heart rate features.
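As a rough illustration of the kind of recursive feature elimination the abstract refers to, the sketch below applies scikit-learn's RFE with a linear SVM to a synthetic feature matrix. The feature names, data, and base classifier are placeholder assumptions for illustration only; the paper's actual feature sets and its margin-maximizing variant are not reproduced here.

```python
# Minimal sketch of recursive feature elimination (RFE) for
# multimodal emotion recognition. All names and data below are
# synthetic stand-ins, not the paper's actual features or databases.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)

# Hypothetical feature matrix: rows are trials, columns are features
# drawn from different physiological modalities (EEG, GSR, HR).
n_trials = 200
feature_names = (
    [f"EEG_band_power_{i}" for i in range(8)]
    + [f"GSR_stat_{i}" for i in range(4)]
    + [f"HR_stat_{i}" for i in range(4)]
)
X = rng.normal(size=(n_trials, len(feature_names)))
y = rng.integers(0, 2, size=n_trials)  # e.g. low vs. high arousal

# Linear SVM as the base estimator: RFE repeatedly fits the model and
# eliminates the feature with the smallest absolute weight until only
# n_features_to_select remain.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)

# Surviving features (rank 1) approximate the most discriminant ones;
# higher ranks indicate earlier elimination.
for name, rank in sorted(zip(feature_names, selector.ranking_), key=lambda t: t[1]):
    print(f"{name}: rank {rank}")
```

On real data, the surviving features would indicate which modalities carry the most discriminative information for a given index, which is the kind of analysis the paper performs per arousal and valence label.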
