Abstract

Recently, emotion classification from electroencephalogram (EEG) data has attracted much attention. Because EEG is an unsteady, rapidly changing voltage signal, the features extracted from it usually change dramatically, whereas emotional states change gradually. Most existing feature extraction approaches do not account for this mismatch between EEG and emotion. Microstate analysis can capture important spatio-temporal properties of EEG signals while reducing the fast-changing signal to a sequence of prototypical topographical maps. Although microstate analysis has been widely used to study brain function, few studies have applied it to how the brain responds to emotional auditory stimuli. In this study, we propose a novel feature extraction method based on EEG microstates for emotion recognition. Because determining the optimal number of microstates automatically is a challenge when applying microstate analysis to emotion, we also propose dual-threshold-based atomize and agglomerate hierarchical clustering (DTAAHC), which determines the optimal number of microstate classes automatically. By using the proposed method to model the temporal dynamics of the auditory emotion process, we extract microstate characteristics as novel temporospatial features to improve the performance of emotion recognition from EEG signals. We evaluated the proposed method on two datasets. On the public music-evoked Dataset for Emotion Analysis using Physiological signals (DEAP), microstate analysis identified 10 microstates that together explained around 86% of the variance at global field power peaks; emotion recognition using microstate sequence characteristics as features achieved an accuracy of 75.8% for valence and 77.1% for arousal. Compared with previous studies, the proposed method outperformed existing feature sets. On the speech-evoked EEG dataset, microstate analysis identified nine microstates that together explained around 85% of the variance, and accuracy reached 74.2% for valence and 72.3% for arousal. These results indicate that microstate characteristics can effectively improve the performance of emotion recognition from EEG signals.
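To make the pipeline the abstract describes more concrete, the sketch below illustrates a generic microstate feature-extraction flow: selecting global field power (GFP) peaks, clustering peak topographies into prototypical maps, backfitting labels to the full recording, and computing sequence features. It is a minimal illustration on synthetic data, not the authors' implementation; plain k-means stands in for DTAAHC, and the channel count, sampling rate, class count, and all variable names are assumptions.

```python
# Minimal sketch of a microstate feature-extraction pipeline on toy data.
# Plain k-means stands in for the paper's DTAAHC; channel count, sampling
# rate, and k are assumptions for illustration only.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fs = 128                                    # assumed sampling rate (Hz)
eeg = rng.standard_normal((32, fs * 60))    # 32 channels, 60 s of toy EEG

# 1) Global field power (GFP): spatial standard deviation per sample.
gfp = eeg.std(axis=0)

# 2) Topographies at local GFP maxima have the best signal-to-noise ratio.
peak_idx, _ = find_peaks(gfp)
maps = eeg[:, peak_idx].T                   # (n_peaks, n_channels)
maps /= np.linalg.norm(maps, axis=1, keepdims=True)

# 3) Cluster peak topographies into k prototypical maps (k is fixed here;
#    DTAAHC would determine it automatically).
k = 4
templates = KMeans(n_clusters=k, n_init=10, random_state=0).fit(maps).cluster_centers_
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

# 4) Backfit: label each sample with the template of highest
#    polarity-invariant spatial correlation.
norm_eeg = eeg / np.linalg.norm(eeg, axis=0, keepdims=True)
labels = np.abs(templates @ norm_eeg).argmax(axis=0)

# 5) Classic microstate sequence features per class, via run-length
#    encoding: mean duration, time coverage, and occurrence rate.
starts = np.r_[0, np.flatnonzero(np.diff(labels)) + 1]
lengths = np.diff(np.r_[starts, labels.size])
total_s = labels.size / fs
for m in range(k):
    durs = lengths[labels[starts] == m] / fs
    print(f"class {m}: duration {durs.mean():.3f} s, "
          f"coverage {durs.sum() / total_s:.2%}, "
          f"occurrence {durs.size / total_s:.2f}/s")
```

The duration, coverage, and occurrence statistics printed in step 5 are the kind of microstate sequence characteristics that the abstract reports feeding into the valence and arousal classifiers.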

Highlights

  • To make human–machine interaction more natural, emotion recognition should play an important role

  • This study proposes dual-threshold-based atomize and agglomerate hierarchical clustering (DTAAHC), which determines the optimal number of microstate classes automatically (a generic clustering sketch follows these highlights)

  • By using the proposed method to model the temporal dynamics of the auditory emotion process, we extract microstate characteristics as novel temporospatial features to improve the performance of emotion recognition from EEG signals
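The exact dual thresholds of DTAAHC are not detailed in this excerpt, but the following sketch shows the general idea of threshold-driven selection of the class count: agglomerative clustering is cut where inter-cluster map similarity drops below a threshold, so the number of classes emerges from the data rather than being fixed in advance. This is a generic stand-in, not the paper's algorithm; `choose_microstate_classes` and `merge_thresh` are hypothetical names.

```python
# Generic sketch of picking the number of microstate classes with a
# similarity threshold, as a stand-in for DTAAHC (whose dual thresholds
# are not specified in this excerpt). Clusters of peak topographies are
# merged agglomeratively until no two are more similar than merge_thresh.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def choose_microstate_classes(maps: np.ndarray, merge_thresh: float = 0.7) -> np.ndarray:
    """maps: (n_maps, n_channels) array of L2-normalised topographies.
    Returns one cluster label per map; the number of unique labels is
    the automatically chosen number of microstate classes."""
    # Polarity-invariant distance: 1 - |spatial correlation| between maps.
    corr = 1.0 - pdist(maps, metric="correlation")   # pdist returns 1 - r
    dist = 1.0 - np.abs(corr)
    tree = linkage(dist, method="average")
    # Cut the dendrogram where further merges would join clusters whose
    # similarity falls below merge_thresh.
    return fcluster(tree, t=1.0 - merge_thresh, criterion="distance")

# Hypothetical usage on the `maps` array from the earlier sketch:
# labels = choose_microstate_classes(maps, merge_thresh=0.7)
# n_classes = np.unique(labels).size
```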


Introduction

To make human–machine interaction more natural, emotion recognition should play an important role. Interest in emotion recognition from different modalities (e.g., face, speech, body posture, and physiological responses) has risen over the past decades. Physiological signals measure changes in physiological responses to emotional stimuli. They have the advantage of bypassing social masking and feigned emotional expressions, offering a better understanding of underlying emotions (Jang et al., 2015). Among the various types of physiological signals, the electroencephalogram (EEG) provides a direct measure of the electrical activity of the brain. It has been used in cognitive neuroscience to investigate the regulation and processing of emotion (Dennis and Solomon, 2010; Thiruchselvam et al., 2011). With the rapid development of dry EEG electrode techniques, EEG-based emotion recognition has found increasing application in fields such as affective brain–computer interaction (Atkinson and Campos, 2016; Chen et al., 2021), healthcare (Hossain and Muhammad, 2019), emotional companionship, and e-learning (Ali et al., 2016).
