Abstract

Objective: This study addresses user-specific bias in Brain-Computer Interfaces (BCIs) by proposing a novel methodology. The primary objective is to employ a hybrid deep learning model, combining 2D Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) layers, to analyze EEG signals and classify imagined tasks. The overarching goal is to create a generalized model that is applicable to a broader population and mitigates user-specific biases.

Materials and Methods: EEG signals from imagined motor tasks in the public Physionet dataset form the basis of the study, chosen to complement the databases commonly used in the BCI competitions. A model of arrays emulating the electrode arrangement on the head is proposed to capture spatial information with the CNN, while LSTM layers capture temporal information, followed by signal classification.

Results: The hybrid model achieves a high classification rate, reaching up to 90% for specific users and averaging 74.54%. Error-detection thresholds are set to exclude subjects with low task affinity, yielding an improvement in classification accuracy of up to 21.34%.

Conclusion: The proposed methodology makes a significant contribution to the BCI field by providing a generalized system trained on diverse user data that effectively captures the spatial and temporal features of EEG signals. The study emphasizes the value of the hybrid model in advancing BCIs, highlighting its potential for improved reliability and accuracy in human-computer interaction, and suggests exploring additional advanced layers, such as transformers, to further enhance the methodology.
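The architecture described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: EEG samples are mapped onto a 2D grid emulating the electrode layout on the head, a 2D CNN extracts spatial features per time window, and an LSTM models the temporal sequence before classification. The grid size (10x11), channel counts, and all other hyperparameters are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Hypothetical CNN-LSTM sketch: spatial features from an
    electrode-layout grid, then temporal modelling, then classification."""

    def __init__(self, grid_h=10, grid_w=11, n_classes=4, hidden=64):
        super().__init__()
        # 2D CNN over the electrode grid (spatial information)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # LSTM over the sequence of per-window features (temporal information)
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_windows, grid_h, grid_w)
        b, t, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, 1, h, w))  # spatial features per window
        feats = feats.reshape(b, t, -1)              # back into a sequence
        _, (h_n, _) = self.lstm(feats)               # temporal modelling
        return self.fc(h_n[-1])                      # class logits

# Usage: a batch of 8 trials, 20 time windows on a 10x11 electrode grid
logits = HybridCNNLSTM()(torch.randn(8, 20, 10, 11))
print(logits.shape)  # torch.Size([8, 4])
```

Arranging the channels as a spatial grid, rather than a flat channel vector, is what lets the 2D convolution exploit the physical adjacency of electrodes on the scalp.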
