Abstract

Deep learning (DL)-based methods have been successfully employed as asynchronous classification algorithms in steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems. However, these methods often suffer from the limited amount of available electroencephalography (EEG) data, which leads to overfitting. This study proposes an effective data augmentation approach, EEG mask encoding (EEG-ME), to mitigate overfitting. By masking part of the EEG data, EEG-ME forces models to learn more robust features, enhancing their generalization capability. Three network architectures are used to validate the effectiveness of EEG-ME on the publicly available Benchmark and BETA datasets: an architecture integrating a convolutional neural network (CNN) with a Transformer (CNN-Former), a time-domain CNN (tCNN), and a lightweight architecture (EEGNet). The results demonstrate that EEG-ME significantly improves the average classification accuracy of these DL-based methods across different time-window lengths on both datasets. Taking the 1-second time window as an example, CNN-Former, tCNN, and EEGNet achieve improvements of 3.18%, 1.42%, and 3.06% on the Benchmark dataset and 11.09%, 3.12%, and 2.81% on the BETA dataset, respectively. The enhanced SSVEP classification performance with EEG-ME facilitates the implementation of asynchronous SSVEP-BCI systems, improving the robustness and flexibility of human-machine interaction.
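
To make the masking idea concrete, the sketch below shows one plausible form of the augmentation described above: zeroing out a randomly chosen portion of an EEG trial before it is fed to the network. This is a minimal illustration only; the function name eeg_mask_encoding, the 20% mask ratio, the contiguous temporal mask, and the zero fill value are assumptions, as the abstract does not specify the paper's exact masking scheme.

import numpy as np

def eeg_mask_encoding(eeg, mask_ratio=0.2, rng=None):
    # eeg        : array of shape (n_channels, n_samples), one SSVEP trial.
    # mask_ratio : fraction of time samples to zero out (assumed value).
    rng = np.random.default_rng() if rng is None else rng
    n_channels, n_samples = eeg.shape
    masked = eeg.copy()

    # Mask one contiguous temporal segment per trial (assumption: the paper
    # may instead mask several segments, individual samples, or channels).
    seg_len = int(mask_ratio * n_samples)
    start = rng.integers(0, n_samples - seg_len + 1)
    masked[:, start:start + seg_len] = 0.0
    return masked

# Example: augment a single 1-second trial sampled at 250 Hz over 9 channels.
trial = np.random.randn(9, 250)
augmented = eeg_mask_encoding(trial, mask_ratio=0.2)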
