Abstract
Inner speech is pivotal for developing practical, everyday Brain-Computer Interface (BCI) applications because it is a type of brain signal that operates independently of external stimuli; however, it remains largely underexploited due to the difficulty of decoding its signals. In this study, we evaluate the behavior of various Machine Learning (ML) and Deep Learning (DL) models on a publicly available inner-speech dataset, employing popular preprocessing methods as feature extractors to improve model training. We face significant challenges such as subject-dependent variability, high noise levels, and overfitting. To address overfitting in particular, we propose “BruteExtraTree”: a new classifier that relies on the moderate stochasticity inherited from its base model, the ExtraTreeClassifier. In our experiments, this model not only matches the best DL model, ShallowFBCSPNet, in the subject-independent scenario, scoring 32% accuracy, but also surpasses the state of the art by achieving 46.6% average per-subject accuracy in the subject-dependent case. Our subject-dependent results suggest the possibility of a new paradigm for using inner speech data, inspired by LLM pretraining, while also highlighting the crucial need for drastic improvements in data recording or noise removal methods to reach practical accuracies in the subject-independent case.
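The abstract does not spell out how BruteExtraTree is built; the following is a minimal sketch of one plausible reading, assuming it behaves like a bagged ensemble of randomized trees whose base learner is scikit-learn's ExtraTreeClassifier, with the random split selection acting as the regularizing stochasticity. The ensemble construction and all hyperparameters here are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch only: assumes BruteExtraTree resembles a bagged
# ensemble of ExtraTreeClassifier base learners. Not the paper's code.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import ExtraTreeClassifier

# Stand-in data: real inputs would be preprocessed EEG feature vectors.
X, y = make_classification(n_samples=400, n_features=64, n_classes=4,
                           n_informative=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each extra-tree picks split thresholds at random; this injected
# stochasticity is what the abstract credits with curbing overfitting.
model = BaggingClassifier(
    estimator=ExtraTreeClassifier(max_features="sqrt"),
    n_estimators=200,
    random_state=0,
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```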