Abstract

A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and yields more dominant visually evoked potentials. This work focuses on BCIs based on visual evoked potentials (VEPs) and their capability as a continuous control interface for the augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, each flickering with a code sequence. After a calibration run comprising 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target from the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run with visual feedback of the current selection. Additionally, an algorithm was implemented that suppressed false positive selections, allowing the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed.
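The abstract does not specify the internals of the linear classifier or the false-positive suppression. A common approach in c-VEP BCIs is correlation-based template matching over a sliding buffer, with a rejection threshold that suppresses selections when no template matches well. The sketch below illustrates that idea; the sampling rate, window length, threshold value, and synthetic templates are all hypothetical placeholders, not values from the study.

```python
import numpy as np

def classify_cvep(window, templates, reject_threshold=0.3):
    """Match a buffered ECoG window against per-target c-VEP templates.

    Returns (target_index, scores); target_index is None when the best
    correlation stays below the rejection threshold, i.e. the selection
    is suppressed as a likely false positive (idle state).
    """
    # Pearson correlation between the window and each target's template
    scores = np.array([np.corrcoef(window, t)[0, 1] for t in templates])
    best = int(np.argmax(scores))
    if scores[best] < reject_threshold:
        return None, scores  # suppress: no target selected
    return best, scores

# Synthetic demo data (hypothetical 200 Hz rate, 3.15 s buffer)
rng = np.random.default_rng(0)
n_samples = int(200 * 3.15)
templates = rng.standard_normal((4, n_samples))      # 4 target templates
window = templates[2] + 0.5 * rng.standard_normal(n_samples)  # noisy target 2

target, scores = classify_cvep(window, templates)
idle_target, _ = classify_cvep(rng.standard_normal(n_samples), templates)
```

In an online setting, this function would run repeatedly on the most recent buffer; because an uncorrelated (idle) window yields near-zero correlations with every template, the threshold lets the user effectively start and stop the interface at will.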

Highlights

  • People have long sought to extract users’ intentions from brain signals to give impaired persons a communication channel or optimize interaction between users and their environments

  • The main goal of this study is to investigate an intracranial code-based visual evoked potential (c-VEP) BCI designed as a continuous control interface for the augmentation of video applications

  • This work successfully showed that a continuous control signal can be extracted from ECoG data with code-based VEPs (c-VEP)


Introduction

People have long sought to extract users’ intentions from brain signals, either to give impaired persons a communication channel or to optimize the interaction between users and their environments. Such a brain-computer interface (BCI) allows the user to control a device or software with brain activity (Wolpaw et al., 2002). The EEG has only limited spatial resolution, as each channel is influenced by the activation of millions of neurons, and the signal is smeared and filtered during its passage through the scalp. ECoG signals recorded from the brain’s surface are more robust against electromyographic (EMG) artifacts and provide higher spatial and temporal resolution than EEG signals (Leuthardt et al., 2004). Several groups have investigated the reliability of electrocorticographic signals for real-time applications, such as 2D movement control based on motor imagery tasks (Schalk et al., 2008) or a P300 spelling device (Brunner et al., 2011).

