Abstract

Emotion is a subjective, conscious experience that arises when people encounter different kinds of stimuli. In this paper, we adopt Deep Canonical Correlation Analysis (DCCA) to learn high-level coordinated representations and extract features from EEG and eye movement data. The parameters of the two views' nonlinear transformations are learned jointly to maximize the correlation between the transformed views. We propose a multi-view emotion recognition framework and evaluate its effectiveness on three real-world datasets. We find that DCCA efficiently learns representations with high correlation, which contributes to higher emotion classification accuracy. Our experimental results indicate that the DCCA model outperforms state-of-the-art methods, with mean accuracies of 94.58% on the SEED dataset, 87.45% on the SEED-IV dataset, and 88.51% and 84.98% on the four-category and dichotomous classification tasks of the DEAP dataset, respectively.
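
As a rough illustration of the coordinated-representation idea described above, the sketch below trains two small networks, one per view, to maximize the total canonical correlation between their outputs. This is a minimal PyTorch sketch using synthetic stand-in data, not the paper's implementation: the feature dimensions, network sizes, output dimension, and optimizer settings are all illustrative assumptions.

```python
# Minimal DCCA sketch (assumptions: synthetic data in place of real EEG /
# eye-movement features; illustrative layer sizes and hyperparameters).
import torch
import torch.nn as nn

def cca_loss(H1, H2, eps=1e-4):
    """Negative total canonical correlation between two projected views.

    Uses the sum of all singular values of the whitened cross-covariance
    (the trace-norm variant of the DCCA objective).
    """
    n = H1.size(0)
    H1 = H1 - H1.mean(dim=0, keepdim=True)  # center each view
    H2 = H2 - H2.mean(dim=0, keepdim=True)
    # Regularized covariance estimates.
    S11 = (H1.T @ H1) / (n - 1) + eps * torch.eye(H1.size(1))
    S22 = (H2.T @ H2) / (n - 1) + eps * torch.eye(H2.size(1))
    S12 = (H1.T @ H2) / (n - 1)

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix.
        w, V = torch.linalg.eigh(S)
        return V @ torch.diag(w.clamp(min=eps).rsqrt()) @ V.T

    # Singular values of T are the canonical correlations.
    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return -torch.linalg.svdvals(T).sum()

def mlp(d_in, d_out, hidden=128):
    # One nonlinear transformation per view.
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

# Hypothetical input dimensions: 310 EEG features, 33 eye-movement features.
net_eeg, net_eye = mlp(310, 20), mlp(33, 20)
opt = torch.optim.Adam(
    list(net_eeg.parameters()) + list(net_eye.parameters()), lr=1e-3)

x_eeg, x_eye = torch.randn(256, 310), torch.randn(256, 33)  # stand-in batch
for step in range(100):
    opt.zero_grad()
    # Both views' parameters are updated jointly to maximize correlation.
    loss = cca_loss(net_eeg(x_eeg), net_eye(x_eye))
    loss.backward()
    opt.step()
```

After training, the two projected views form the coordinated representation; a downstream classifier would then be trained on these (or their concatenation) for emotion recognition.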
