Abstract

Background and Objective
This paper investigates a novel way to interact with home appliances via a brain-computer interface (BCI), using electroencephalography (EEG) signals acquired from around the user's ears with a custom-made wearable BCI headphone.

Methods
Users engage in speech imagery (SI), a mental task in which they imagine speaking a specific word without producing any sound, to control an interactive simulated home appliance. In this work, multiple models are employed to improve the performance of the system. The temporally-stacked multi-band covariance matrix (TSMBC) method is used to represent neural activity during SI tasks, capturing spatial, temporal, and spectral information (a code sketch follows this abstract). To further increase the system's usability in daily life, a calibration session, in which the pre-trained models are fine-tuned, is added to maintain performance over time with minimal additional training. Eleven participants were recruited to evaluate the method over three sessions: a training session, a calibration session, and an online session in which users were free to achieve a given goal on their own.

Results
In the offline experiment, all participants achieved classification accuracy significantly above chance level. In the online experiments, a few participants were able to use the proposed system to freely control the home appliance with high accuracy and relatively fast command delivery. The best participant achieved an average true positive rate of 0.85 and an average command delivery time of 3.79 s per command.

Conclusion
Based on the positive experimental results and user surveys, the novel ear-EEG SI-based BCI paradigm is a promising approach for wearable BCI systems in daily life.
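The abstract does not give the exact TSMBC construction, so the following is only a minimal illustrative sketch of the idea it describes: band-pass filter each trial into several frequency bands, compute a spatial covariance matrix per band within each sliding temporal window, and stack the per-band covariances block-diagonally so that spatial, spectral, and temporal structure are all retained. The function name, band edges, window lengths, and sampling rate below are all assumptions, not values from the paper.

import numpy as np
from scipy.linalg import block_diag
from scipy.signal import butter, sosfiltfilt

def tsmbc_features(eeg, fs, bands=((4, 8), (8, 13), (13, 30)),
                   win_sec=1.0, step_sec=0.5):
    """Illustrative temporally-stacked multi-band covariance features.

    eeg : array of shape (n_channels, n_samples), one SI trial.
    fs  : sampling rate in Hz.
    Returns an array of shape (n_windows, n_bands * n_channels,
    n_bands * n_channels) holding one block-diagonal covariance
    stack per temporal window.
    """
    # Band-pass filter the whole trial once per frequency band.
    filtered = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        filtered.append(sosfiltfilt(sos, eeg, axis=-1))

    win, step = int(win_sec * fs), int(step_sec * fs)
    feats = []
    for start in range(0, eeg.shape[-1] - win + 1, step):
        # One spatial covariance per band for this temporal window,
        # stacked block-diagonally to keep spatio-spectral structure.
        covs = [np.cov(x[:, start:start + win]) for x in filtered]
        feats.append(block_diag(*covs))
    return np.stack(feats)

# Example: a synthetic 4-channel ear-EEG trial of 3 s at 250 Hz.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trial = rng.standard_normal((4, 750))
    features = tsmbc_features(trial, fs=250)
    print(features.shape)  # (5, 12, 12): 5 windows, 3 bands x 4 channels

Features of this form are commonly fed to a Riemannian or tangent-space classifier in covariance-based BCI pipelines; whether the authors do so here is not stated in the abstract.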
