Abstract

Recently, deep learning-based electroencephalogram (EEG) analysis and decoding have attracted widespread attention for monitoring users' clinical condition and identifying their intention/emotion. Nevertheless, existing methods generally model EEG signals from limited viewpoints or with restricted consideration of the characteristics of EEG signals; thus, they struggle to represent complex spectro-/spatiotemporal patterns and suffer from high variability. In this work, we propose novel EEG-oriented self-supervised learning methods and a novel deep architecture to learn rich representations, including information about the diverse spectral characteristics of neural oscillations, the spatial properties of the electrode sensor distribution, and the temporal patterns from both global and local viewpoints. Along with the proposed self-supervision strategies and deep architecture, we devise a feature normalization strategy to resolve the intra-/inter-subject variability problem. We demonstrate the validity of our proposed deep learning framework on four publicly available datasets by comparing it with state-of-the-art baselines. It is also noteworthy that we exploit the same network architecture across the four different EEG paradigms and outperform the comparison methods, thereby addressing the challenge of task-dependent network architecture engineering in EEG-based applications.
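The abstract does not specify how the feature normalization is implemented. As a minimal sketch of one common way to mitigate intra-/inter-subject variability, the Python snippet below standardizes learned feature vectors separately per subject; the function name, array shapes, and the choice of z-scoring are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def subjectwise_feature_norm(features, subject_ids, eps=1e-8):
    """Standardize each subject's features to zero mean and unit variance.

    features:    (n_samples, n_features) array of learned EEG representations
    subject_ids: (n_samples,) array mapping each sample to its subject
    eps:         small constant to avoid division by zero
    """
    normalized = np.empty_like(features, dtype=np.float64)
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        mu = features[mask].mean(axis=0)       # per-subject feature mean
        sigma = features[mask].std(axis=0)     # per-subject feature std
        normalized[mask] = (features[mask] - mu) / (sigma + eps)
    return normalized
```

Normalizing within each subject removes subject-specific offsets and scale differences before downstream classification, which is one plausible reading of the variability problem the abstract describes.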
