Abstract

Brain-computer interface (BCI) systems are designed to translate measured brain signals into a set of instructions, enabling interaction with the outside world. The most common application of such systems is to aid patients with limited physical capabilities, where visual-stimulation-based BCIs are most widely used. However, in the most severe cases, e.g. for patients with amyotrophic lateral sclerosis, the visual-stimulation approach is insufficient for high-level interaction because of the patient's limited eye-movement capabilities. To overcome these limitations, we propose a novel design of a two-stage auditory BCI system which exposes the subject to a stream of letter utterances, so that the subject is stimulated directly with letter pronunciations, reducing the required mental effort. This contrasts with frequently used auditory spellers, which map each letter to an unrelated sound (e.g. different instruments, natural sounds). Discrimination between target and non-target letters is performed with a convolutional neural network whose spatial and temporal convolutional filters efficiently extract event-related brain-activity features. A second stage of target-letter discrimination, which uses a reduced stimulus set based on the results of the first classification stage, is proposed to increase accuracy and usability in real-world situations. The proposed BCI has been tested on ten healthy subjects, achieving an average spelling accuracy of 30% (max. 100%) and a high information transfer rate of 2.38 bits/min (max. 8.14 bits/min), outperforming state-of-the-art auditory BCI spellers.
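The abstract reports performance as an information transfer rate (ITR) in bits/min. A common way to compute ITR for spellers is the Wolpaw formula; the sketch below is a minimal illustration of that formula, not the paper's own evaluation code, and the class count and selection time passed in the example are assumptions for illustration only.

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, selection_time_s):
    """Wolpaw ITR: bits per selection scaled to bits per minute.

    bits = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    where N is the number of classes and P the selection accuracy.
    """
    n, p = n_classes, accuracy
    if p == 1.0:
        bits = math.log2(n)
    else:
        # p*log2(p) -> 0 as p -> 0, so treat p == 0 as a limit case
        p_term = p * math.log2(p) if p > 0 else 0.0
        bits = math.log2(n) + p_term + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# Hypothetical example: a 26-letter speller, one selection per minute.
# At chance-level accuracy (1/26) the ITR is zero by construction.
print(wolpaw_itr_bits_per_min(26, 1.0, 60.0))   # log2(26) ~ 4.70 bits/min
print(wolpaw_itr_bits_per_min(26, 1 / 26, 60.0))
```

Note that the formula rewards both accuracy and speed: halving the selection time doubles the ITR at the same accuracy, which is why two-stage designs trade extra stimuli against improved accuracy.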
