Abstract

Objective. Typically, a brain–computer interface (BCI) is calibrated using user- and session-specific data because of the individual idiosyncrasies and the non-stationary signal properties of the electroencephalogram (EEG). Therefore, BCIs commonly undergo a time-consuming passive training stage that prevents users from directly operating them. In this study, we systematically reduce the training data set in a stepwise fashion to ultimately arrive at a calibration-free method for a code-modulated visual evoked potential (cVEP)-based BCI, fully eliminating the tedious training stage. Approach. In an extensive offline analysis, we compare our sophisticated encoding model with a traditional event-related potential (ERP) technique. We calibrate the encoding model in three ways: in the standard manner, with data limited to a single class while generalizing to all others, and without any data at all. In addition, we investigate the feasibility of the zero-training cVEP BCI in an online setting. Main results. By adopting the encoding model, the training data can be reduced substantially while maintaining both the classification performance and the explained variance of the ERP method. Moreover, the encoding model still shows excellent performance with data from only one class or even no data at all. In addition, the zero-training cVEP BCI achieved high communication rates in an online spelling task, demonstrating its feasibility for practical use. Significance. To date, this is the fastest zero-training cVEP BCI in the field, allowing high communication speeds without calibration while using only a few non-invasive water-based EEG electrodes. This allows us to skip the training stage altogether and spend all the valuable time on direct operation, which minimizes the session time and opens up exciting new directions for practical plug-and-play BCIs. Fundamentally, these results validate that the adopted neural encoding model compresses data into event responses without loss of explanatory power compared to using full ERPs as templates.

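For readers unfamiliar with the encoding (reconvolution) approach, the following is a minimal illustrative sketch in Python/NumPy of the underlying idea: model single-trial EEG as a superposition of transient responses time-locked to stimulus events, estimate those responses with least squares, and reconvolve them with the code to predict an EEG template. All variable names, sizes, and the use of a single event type are simplifying assumptions for illustration, not the exact design used in the paper.

import numpy as np

def structure_matrix(events, response_len):
    # Toeplitz-like design matrix: one time-shifted copy of the event-onset
    # vector per sample of the transient response to be estimated.
    n = len(events)
    M = np.zeros((n, response_len))
    for lag in range(response_len):
        M[lag:, lag] = events[:n - lag]
    return M

# Toy data (sizes and names are illustrative only).
rng = np.random.default_rng(0)
response_len = 36                              # samples of the transient response
code = rng.integers(0, 2, 126).astype(float)   # pseudo-random stimulus sequence (one cycle)
true_response = np.hanning(response_len)       # simulated "brain" response
M = structure_matrix(code, response_len)
eeg = M @ true_response + 0.5 * rng.standard_normal(len(code))  # noisy single trial

# Encoding model: estimate the transient response by least squares, then
# reconvolve it with the code to predict the full EEG template for that code.
response_hat, *_ = np.linalg.lstsq(M, eeg, rcond=None)
template = M @ response_hat
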
Highlights

  • A brain–computer interface (BCI) enables the use of a non-muscle channel to communicate with the external world by extracting intentions from measured brain activity and by converting these to a computer output [1]

  • We focus on code-modulated visual evoked potential (cVEP) BCI, which uses optimized pseudo-random sequences to encode stimuli

  • Encoding model reduces training time, down to requiring none at all: using extensive offline analyses on a large data set of 30 participants, we showed that the traditional event-related potential (ERP) training (e-train) can be made much more efficient by employing an encoding model that predicts the EEG from the stimulus sequence (n-train, 1-train, 0-train); a minimal sketch of this idea follows below

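As a rough illustration of how such predicted templates can replace a user-specific calibration, the sketch below classifies a single trial by correlating it with the template predicted for every candidate code and selecting the best match; with 1-train or 0-train, the transient responses would come from a single attended code or from other participants rather than a full calibration session. Function names and the correlation metric are assumptions for illustration only.

import numpy as np

def pearson(a, b):
    # Pearson correlation between two 1-D signals.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_trial(eeg, templates):
    # Pick the candidate code whose predicted EEG template correlates best
    # with the (spatially filtered) single-trial EEG.
    scores = [pearson(eeg, t) for t in templates]
    return int(np.argmax(scores))

# Usage sketch: templates[k] would be the EEG predicted by the encoding model
# for code k (e.g. via the reconvolution sketch above); eeg is one measured
# trial of the same length. classify_trial returns the index of the attended
# stimulus.
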

April 2021

Keywords: brain–computer interface (BCI), electroencephalography (EEG), code-modulated visual evoked potentials (cVEPs), reconvolution, zero training, spread spectrum communication

Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence.

Introduction
Methods
Classification
Results
Discussion
Conclusion
