Abstract

In recent years, human emotion recognition has received considerable attention because it plays an essential role in human-computer interaction. Traditional methods that analyze electroencephalogram (EEG) signals in either the time or the frequency domain alone are unsuitable because EEG signals are nonlinear. This paper proposes a subject-independent method for recognizing human emotion from multi-channel EEG signals. The proposed method first obtains the time-frequency content of each channel using the modified Stockwell transform and then extracts deep features from each time-frequency representation with a deep convolutional neural network (CNN). Because the number of deep features is very large, semi-supervised dimension reduction (SSDR) is applied to reduce them, and the reduced features of all channels are fused to construct the final feature vector. Several CNNs and classifiers are examined for deep feature extraction and classification, respectively. Classification experiments on two-class and four-class scenarios from the DEAP dataset and a three-class scenario from the SEED dataset show that the combination of the Inception-V3 CNN and a support vector machine (SVM) classifier yields the highest accuracy. Extensive simulations demonstrate the efficiency of the proposed method, and a performance comparison with current methods based on time-frequency analysis shows that the proposed method outperforms them.
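To make the pipeline concrete, the sketch below illustrates the per-channel flow the abstract describes: time-frequency decomposition, CNN feature extraction, dimension reduction, channel fusion, and SVM classification. It is a minimal illustration under stated assumptions, not the paper's implementation: a standard Stockwell transform stands in for the paper's modified variant, PCA stands in for the unspecified SSDR method, and names such as `fused_features` and parameters such as `n_components=50` are illustrative choices.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def s_transform(x):
    """Standard discrete Stockwell transform of a 1-D signal.

    Returns an (n//2, n) magnitude time-frequency matrix. The paper's
    *modified* transform would change the Gaussian window below.
    """
    n = len(x)
    X = np.fft.fft(x)
    alpha = np.fft.fftfreq(n, d=1.0 / n)          # frequency-shift variable
    st = np.zeros((n // 2, n))
    st[0] = np.abs(np.mean(x))                    # zero-frequency row = signal mean
    for f in range(1, n // 2):
        window = np.exp(-2.0 * np.pi ** 2 * alpha ** 2 / f ** 2)
        st[f] = np.abs(np.fft.ifft(np.roll(X, -f) * window))
    return st

def deep_features(tf_maps, cnn):
    """Turn each (freq, time) map into a pooled Inception-V3 feature vector."""
    maps = np.stack([m / m.max() for m in tf_maps])[..., None]
    imgs = tf.image.grayscale_to_rgb(tf.image.resize(maps, (299, 299)))
    imgs = tf.keras.applications.inception_v3.preprocess_input(imgs.numpy() * 255.0)
    return cnn.predict(imgs, verbose=0)

# Pretrained Inception-V3 without its classification head: global-average
# pooling yields one 2048-dimensional deep feature vector per image.
cnn = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg"
)

def fused_features(trials, n_components=50):
    """trials: array of shape (n_trials, n_channels, n_samples).

    Per channel: S-transform -> CNN features -> dimension reduction;
    the reduced vectors of all channels are then concatenated (fused).
    """
    n_trials, n_channels, _ = trials.shape
    per_channel = []
    for ch in range(n_channels):
        maps = [s_transform(trials[i, ch]) for i in range(n_trials)]
        feats = deep_features(maps, cnn)          # (n_trials, 2048)
        # PCA stands in here for the paper's semi-supervised SSDR step.
        per_channel.append(PCA(n_components).fit_transform(feats))
    return np.concatenate(per_channel, axis=1)    # final fused feature vector

# Usage sketch (eeg_trials and emotion_labels are assumed to exist):
# X = fused_features(eeg_trials)
# clf = SVC(kernel="rbf").fit(X, emotion_labels)
```

Concatenating the reduced per-channel vectors keeps each channel's contribution explicit before the SVM, which matches the fusion step described in the abstract; the reduction step is what keeps the fused vector tractable given the 2048 features per channel.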
