Abstract

Due to its weak and non-stationary nature, electroencephalogram (EEG) data exhibit significant individual differences. To align the data distributions of different subjects, transfer learning has shown promising performance in cross-subject EEG emotion recognition. However, most existing models learn the domain-invariant features and estimate the target-domain label information sequentially. Such a two-stage strategy breaks the inner connection between the two processes and inevitably leads to sub-optimal solutions. In this paper, we propose a Joint EEG feature Transfer and Semi-supervised cross-subject emotion Recognition (JTSR) model, in which the shared subspace projection matrix and the target labels are jointly optimized. Extensive experiments are conducted on the SEED-IV and SEED datasets, and the results show that 1) the emotion recognition performance is significantly enhanced by the joint learning mode, and 2) the spatial-frequency activation patterns of critical EEG frequency bands and brain regions in cross-subject emotion expression are quantitatively identified by analyzing the learned shared subspace.
