Intracranial electroencephalography (iEEG) records neuronal activity from different brain areas through strip or grid electrodes placed on the surface of the brain. Benefiting from the high temporal and spatial resolution of iEEG signals, iEEG-based motor imagery decoding has attracted considerable attention and has made significant progress under the frameworks of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). However, both CNN- and RNN-based methods are limited in capturing the long-range global dependencies of local features across the temporal and spatial dimensions. To exploit the temporal and spatial relationships in iEEG signals, we propose a Swin-Based Temporal Cascade Channel Network (Swin-TCNet) for motor imagery classification, composed of three modules: Temporal-Swin (TS), Channel-Swin (CS), and a Classifier. The TS and CS modules extract the global dependencies of iEEG signals in the time and channel dimensions, respectively, and the Classifier assigns class labels to the inputs. In addition, we introduce a Temporal-Channel Swin (TC-Swin) block in place of the standard Swin block to effectively reduce computational complexity. We validate our model on the public imagined handwriting movement dataset collected at Stanford University. The experimental results demonstrate that our model improves classification accuracy by 2%–9% compared with six state-of-the-art models.
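To make the cascaded layout concrete, the following is a minimal PyTorch sketch of the TS → CS → Classifier pipeline described above. It is not the authors' implementation: plain windowed multi-head self-attention stands in for the paper's TC-Swin block (no shifted windows), and all embedding sizes, window lengths, head counts, and the pooling/classifier head are illustrative assumptions.

```python
# Hypothetical sketch of a Temporal-Swin -> Channel-Swin -> Classifier cascade.
# Layer sizes, window lengths, and the use of ordinary windowed attention in place
# of the paper's TC-Swin block are assumptions for illustration only.
import torch
import torch.nn as nn


class WindowedAttention(nn.Module):
    """Self-attention restricted to fixed-size windows along the token axis
    (Swin-style, without shifted windows), standing in for the TC-Swin block."""

    def __init__(self, dim, window, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                            # x: (batch, length, dim)
        b, n, d = x.shape
        pad = (-n) % self.window                     # pad so length splits into windows
        x = nn.functional.pad(x, (0, 0, 0, pad))
        xw = x.reshape(-1, self.window, d)           # one row per window
        out, _ = self.attn(xw, xw, xw)               # attention within each window
        out = self.norm(out + xw)                    # residual + layer norm
        return out.reshape(b, -1, d)[:, :n]          # drop padding


class SwinTCNetSketch(nn.Module):
    """Cascade: temporal attention per channel, then channel attention per time
    step, then a linear classifier (assumed global-average-pooling head)."""

    def __init__(self, n_classes, dim=32):
        super().__init__()
        self.embed = nn.Linear(1, dim)               # lift each sample to a dim-d token
        self.ts = WindowedAttention(dim, window=32)  # Temporal-Swin module (assumed window)
        self.cs = WindowedAttention(dim, window=8)   # Channel-Swin module (assumed window)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        b, c, t = x.shape
        h = self.embed(x.unsqueeze(-1))              # (b, c, t, dim)
        h = self.ts(h.reshape(b * c, t, -1)).reshape(b, c, t, -1)   # dependencies over time
        h = h.transpose(1, 2)                        # (b, t, c, dim)
        h = self.cs(h.reshape(b * t, c, -1)).reshape(b, t, c, -1)   # dependencies over channels
        return self.classifier(h.mean(dim=(1, 2)))   # pool over time and channels, classify


# Example: a batch of 4 trials with 64 electrodes and 500 time samples, 3 classes.
logits = SwinTCNetSketch(n_classes=3)(torch.randn(4, 64, 500))
print(logits.shape)  # torch.Size([4, 3])
```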