Abstract

Brain–computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs) typically use a synchronous approach to identify targets (i.e., the system produces command outputs after preset time periods). Hence, users have only a limited amount of time to fixate on a desired target. This hinders the use of more complex interfaces, as these require the BCI to distinguish between intentional and unintentional fixations. In this article, we investigate a dynamic sliding window mechanism as well as software-based stimulus synchronization to enable threshold-based target identification for the c-VEP paradigm. To further improve the usability of the system, an ensemble-based classification strategy was investigated. In addition, a software-based approach for stimulus onset determination is proposed, which simplifies system setup by reducing additional hardware dependencies. The methods were tested with an eight-target spelling application utilizing an n-gram word prediction model. Eighteen participants without disabilities completed word- and sentence-spelling tasks using the c-VEP BCI with mean information transfer rates (ITRs) of 75.7 and 57.8 bits per minute (bpm), respectively.
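For context, ITR in bits per minute is commonly computed with the standard Wolpaw formula for an N-target speller. The sketch below shows that computation; the accuracy and selection-time values in the usage example are hypothetical illustrations, not results from this study.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Information transfer rate (ITR) in bits per minute using the Wolpaw formula."""
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance level, ITR is taken as 0
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1))
    return bits * (60.0 / seconds_per_selection)

# Hypothetical example for an eight-target interface:
print(f"{wolpaw_itr(8, 0.95, 2.0):.1f} bits/min")
```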

Highlights

  • A brain–computer interface (BCI) records, analyzes, and interprets brain activity of the user and can be used for communication with the external environment without involving muscle activity [1]. BCIs can be utilized as a communication device for severely impaired people, e.g., people suffering from spinal cord injuries, brain stem strokes, amyotrophic lateral sclerosis (ALS), or muscular dystrophies [2]. If used as a spelling device, character output speed and classification accuracy are the most important characteristics of the system. Code-modulated visual evoked potentials (c-VEPs) have gathered increasing research interest in the field of BCIs [3,4,5,6].

  • We present a dictionary-driven c-VEP spelling application utilizing n-gram-based dictionary suggestions (a minimal sketch of such suggestions is shown after this list)

  • Flexible time windows were realized, which are rarely seen in c-VEP systems, where fixed time windows are typically used
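The highlights mention n-gram-based dictionary suggestions but do not describe their implementation. The following is a minimal sketch, assuming a bigram model with a unigram fallback; the class and method names are hypothetical and not taken from the paper.

```python
from collections import Counter, defaultdict

class NgramSuggester:
    """Minimal bigram-based word suggester for a dictionary-driven speller."""

    def __init__(self, corpus_words):
        self.unigrams = Counter(corpus_words)
        self.bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus_words, corpus_words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, previous_word, typed_prefix, k=3):
        """Return up to k words starting with typed_prefix, ranked by
        bigram counts given the previous word, falling back to unigram counts."""
        candidates = self.bigrams.get(previous_word, Counter())
        ranked = [w for w, _ in candidates.most_common() if w.startswith(typed_prefix)]
        if len(ranked) < k:
            backoff = [w for w, _ in self.unigrams.most_common()
                       if w.startswith(typed_prefix) and w not in ranked]
            ranked.extend(backoff)
        return ranked[:k]

# Toy usage with a tiny corpus:
corpus = "the quick brown fox jumps over the lazy dog the quiet fox".split()
suggester = NgramSuggester(corpus)
print(suggester.suggest("the", "qu"))  # e.g. ['quick', 'quiet']
```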


Summary

Introduction

A brain–computer interface (BCI) records, analyzes, and interprets brain activity of the user and can be used for communication with the external environment without involving muscle activity [1]. Code-modulated visual evoked potentials (c-VEPs) have gathered increasing research interest in the field of BCIs [3,4,5,6]. Stimulus onset markers are typically sent to the EEG hardware; these timestamps can be acquired using a photo-resistor or photo-diode attached to the screen [4,8]. Typical use cases of c-VEP BCIs are spelling applications for people with severe disabilities [9]. For these implementations, high classification accuracy and speed are desired. The contributions of this article are the implementation of a novel software-based synchronization between stimulus presentation and EEG data acquisition, the investigation of performance improvements in c-VEP detection utilizing an ensemble-based classification approach, and dynamic on-line classification utilizing sliding classification windows with n-gram word prediction (see the sketch after this paragraph). The article evaluates the feasibility of the proposed methods in a test with healthy participants.
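The introduction refers to threshold-based target identification over sliding classification windows, but the summary does not specify the decision rule. The sketch below illustrates one plausible form of the idea, assuming a spatially filtered EEG window is correlated against per-target c-VEP templates and a command is emitted only when the best correlation clears a threshold and a margin over the runner-up; the thresholds, function name, and synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np

def sliding_window_classify(eeg_window, templates, corr_threshold=0.4, min_margin=0.1):
    """Threshold-based target identification over a growing EEG window.

    eeg_window : 1-D array, spatially filtered EEG of the current window
    templates  : (n_targets, n_samples) array of per-target c-VEP templates
    Returns the winning target index, or None if no decision can be made yet.
    """
    n = eeg_window.shape[0]
    # Correlate the window with the matching portion of each template.
    corrs = np.array([np.corrcoef(eeg_window, t[:n])[0, 1] for t in templates])
    order = np.argsort(corrs)[::-1]
    best, second = corrs[order[0]], corrs[order[1]]
    # Emit a command only if the best correlation is high enough and clearly
    # separated from the runner-up; otherwise keep extending the window.
    if best >= corr_threshold and (best - second) >= min_margin:
        return int(order[0])
    return None

# Hypothetical usage with synthetic data: a noisy view of target 3.
rng = np.random.default_rng(0)
templates = rng.standard_normal((8, 512))                   # 8 targets, 512-sample templates
eeg = templates[3, :256] + 0.5 * rng.standard_normal(256)   # partial, noisy window
print(sliding_window_classify(eeg, templates))              # likely prints 3
```

In an online setting, this check would be repeated each time the window grows by one stimulation cycle, so selection time adapts to signal quality instead of being fixed in advance.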

Materials and Methods
Participants
Hardware
Stimulus Design
Synchronization
Experimental Procedure
Dictionary Supported Spelling Interface
Spatial Filtering and Template Generation
Sliding Window Mechanism
Results
Discussion
