Abstract

Numerous deep learning models have been introduced for EEG-based emotion recognition. However, most of these models are fully supervised and demand substantial amounts of labeled EEG signals. Labeling EEG signals is both time-intensive and costly, requiring numerous trials and meticulous analysis by experts. Recently, advanced semi-supervised algorithms have been proposed that achieve performance competitive with fully supervised methods using only a small set of labeled data. However, these algorithms were primarily developed for image data, and naïvely adapting them to EEG applications yields unsatisfactory performance. To address this issue, we present a robust semi-supervised EEG-based method that combines the best techniques from advanced semi-supervised algorithms in the computer vision domain with novel regularization terms for unlabeled signals. The proposed regularization terms improve both the discriminability and diversity of the model's predictions and effectively leverage prior knowledge about the class distributions, thereby outperforming the distribution-alignment techniques used in state-of-the-art methods. We evaluate our method on the DEAP dataset for cross-subject valence/arousal emotion recognition and on the SEED dataset in a cross-session setting. The results indicate that the proposed method consistently surpasses peer methods by a large margin across different numbers of labeled samples.
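
The abstract does not spell out the exact form of the regularizers, but a minimal sketch may help make the idea concrete. The snippet below assumes the discriminability term is a conditional-entropy penalty on per-sample predictions and the diversity/prior term is a KL divergence pulling the batch-average prediction toward a known class prior; the function name `unlabeled_regularizers`, both loss forms, and the loss weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def unlabeled_regularizers(logits, class_prior, eps=1e-8):
    """Hypothetical discriminability/diversity terms for unlabeled EEG segments.

    logits:      (batch, num_classes) model outputs on unlabeled signals.
    class_prior: (num_classes,) known or estimated label distribution.
    """
    probs = F.softmax(logits, dim=1)

    # Discriminability: minimize conditional entropy so each unlabeled
    # prediction becomes confident (low per-sample uncertainty).
    cond_entropy = -(probs * torch.log(probs + eps)).sum(dim=1).mean()

    # Diversity with prior knowledge: push the marginal (batch-average)
    # prediction toward the class prior via KL(marginal || prior), which
    # discourages collapse onto a single class while using the known
    # class distribution instead of assuming a uniform one.
    marginal = probs.mean(dim=0)
    prior_kl = (marginal * (torch.log(marginal + eps)
                            - torch.log(class_prior + eps))).sum()

    return cond_entropy, prior_kl

if __name__ == "__main__":
    logits = torch.randn(32, 2)              # e.g. binary valence: low/high
    prior = torch.tensor([0.5, 0.5])         # assumed balanced class prior
    h_cond, kl = unlabeled_regularizers(logits, prior)
    unlabeled_loss = 1.0 * h_cond + 1.0 * kl  # weights are placeholders
```

These two terms would be added to the supervised loss on the labeled subset; the relative weights would in practice be tuned per dataset.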
