Abstract

A brain-computer interface (BCI) uses neuronal responses to control external systems. Most BCI systems are based on visual stimuli; only a few use auditory input. Because auditory BCIs rely neither on visual skills nor on mobility of the body, they could be an alternative for visually or physically disabled people. This study investigates the performance of an auditory paradigm using two competing streams of repeatedly presented speech syllables, with repetition rates of 2.3 and 3.1 Hz. Our auditory BCI approach uses the auditory steady-state response (ASSR) to automatically detect which stream a listener selectively attends to. In single-trial classification, ten healthy volunteers achieved an accuracy of 61%, significantly above chance, and an information transfer rate (ITR) of 0.2 bit min⁻¹. Averaging over six randomly selected trials improved the mean classification accuracy to 79% while keeping the ITR comparable. In conclusion, it is possible to classify ASSRs evoked by streams of spoken syllables. For real-life application the performance of this auditory BCI must be improved, but it is a step towards the long-term goal of applying BCIs to natural speech features and, eventually, controlling the processing of hearing devices.
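To make the two quantitative ideas above concrete, the following Python sketch illustrates them: deciding which stream is attended by comparing EEG spectral power at the two repetition rates, and converting classification accuracy into information per selection with the standard Wolpaw ITR formula. The winner-take-all decision rule, the 256 Hz sampling rate, the 10 s epoch length, and the synthetic data are illustrative assumptions; the abstract does not specify the actual classifier or trial duration.

import numpy as np

FS = 256.0          # assumed EEG sampling rate (Hz); not stated in the abstract
RATES = (2.3, 3.1)  # syllable repetition rates of the two streams (from the abstract)

def attended_stream(epoch, fs=FS, rates=RATES):
    """Pick the attended stream as the one whose repetition rate carries
    more ASSR power in the EEG spectrum (illustrative winner-take-all rule)."""
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # power at the spectral bin closest to each repetition rate
    powers = [spectrum[np.argmin(np.abs(freqs - r))] for r in rates]
    return int(np.argmax(powers))  # 0 -> 2.3 Hz stream, 1 -> 3.1 Hz stream

def wolpaw_bits(p, n_classes=2):
    """Wolpaw information per selection (bits), for accuracy 0 < p < 1:
    log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1))."""
    return (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))

# Synthetic 10 s epoch: a stronger 2.3 Hz component, a weaker 3.1 Hz one, plus noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1.0 / FS)
epoch = (np.sin(2 * np.pi * 2.3 * t) + 0.3 * np.sin(2 * np.pi * 3.1 * t)
         + rng.normal(0.0, 1.0, t.size))
print("attended stream index:", attended_stream(epoch))            # expect 0 (2.3 Hz)
print("bits/selection at p = 0.61:", round(wolpaw_bits(0.61), 3))  # ~0.035 bit

At 61% binary accuracy the Wolpaw formula yields roughly 0.035 bit per selection, which is consistent with the reported 0.2 bit min⁻¹ if each trial lasts on the order of ten seconds.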
