Abstract

Emotion recognition is an important research topic in the field of human-machine interaction, with applications in medicine, education, psychology, the military, and other areas. Among the various physiological indices used for emotion recognition, electroencephalogram (EEG) signals are the most widely adopted. High classification accuracy can be achieved by extracting the features that are most relevant to, and most discriminative of, emotional states. This study surveys the EEG features that are extensively used in current emotion recognition research, organizing them from four viewpoints: the time domain, the frequency domain, the time-frequency domain, and the spatial domain. An SLDA algorithm is applied to three public EEG emotion datasets (SEED, DREAMER, and CAS-THU) to evaluate how well each feature distinguishes emotional valence. Existing problems and directions for future investigation are also discussed.
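The abstract does not specify the SLDA variant or the exact feature set, so the following is only a minimal sketch of the general pipeline it describes: frequency-domain band-power features extracted from EEG epochs, fed to a linear discriminant to separate two valence classes. The `band_power` and `fisher_lda` helpers, the band choices, and the synthetic "EEG" signals are all illustrative assumptions, not the paper's method; a plain two-class Fisher discriminant stands in for SLDA.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

def fisher_lda(X, y):
    """Two-class Fisher linear discriminant: weight vector and threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, lightly regularized for numerical stability.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    thresh = w @ (m0 + m1) / 2
    return w, thresh

# Synthetic example: 1-s epochs at 128 Hz where "high valence" epochs carry
# stronger alpha (8-13 Hz) content -- purely illustrative, not real EEG data.
rng = np.random.default_rng(0)
fs = 128
t = np.arange(fs) / fs

def epoch(alpha_amp):
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, fs)

bands = [(4, 8), (8, 13), (13, 30)]  # theta, alpha, beta
X = np.array([[band_power(epoch(a), fs, b) for b in bands]
              for a in [0.5] * 40 + [2.0] * 40])
y = np.array([0] * 40 + [1] * 40)

w, thresh = fisher_lda(X, y)
pred = (X @ w > thresh).astype(int)
print("training accuracy:", (pred == y).mean())
```

A real pipeline would add the time-, time-frequency-, and spatial-domain features the survey catalogs, and evaluate with cross-validation rather than training accuracy.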
