Abstract

Objective. In recent years, imagined speech brain–computer interface (BCI) applications have become an important field of study, offering patients with speech impairments an alternative channel of verbal communication. This study aims to classify the imagined speech of numerical digits from electroencephalography (EEG) signals by exploiting the past and future temporal characteristics of the signal with several deep learning models. Approach. This study proposes a methodological combination of EEG signal processing techniques and deep learning models for the recognition of imagined speech signals. EEG signals were filtered and preprocessed using the discrete wavelet transform (DWT) to remove artifacts and retrieve feature information. Multiple variants of multilayer bidirectional recurrent neural networks were then used to classify the preprocessed imagined speech neural signals. Main results. The method was evaluated on MUSE and EPOC signals of MNIST imagined digits from the MindBigData open-access database. The proposed methodology achieved noteworthy performance, with multiclass overall classification accuracy reaching a maximum of 96.18% on MUSE signals and 71.60% on EPOC signals. Significance. This study shows that the proposed signal preprocessing approach and the stacked bidirectional recurrent network model are well suited to exploiting the high temporal resolution of EEG signals for imagined digit classification, indicating that each imagined digit class has a unique neural identity that distinguishes it from the others.
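
As a concrete illustration of the pipeline the abstract outlines, the sketch below pairs DWT-based denoising with a two-layer stacked bidirectional LSTM classifier. It is a minimal sketch under assumed settings: the libraries (PyWavelets, Keras), the db4 wavelet, the decomposition level, the soft-thresholding rule, the layer widths, and the synthetic data shapes are all illustrative choices, not the paper's reported configuration.

import numpy as np
import pywt
from tensorflow.keras import layers, models

def dwt_denoise(signal, wavelet="db4", level=4):
    # Decompose one EEG channel, soft-threshold the detail bands,
    # and reconstruct: a common way to suppress artifacts while
    # retaining the components that carry feature information.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def build_model(timesteps, channels, n_classes=10):
    # Two stacked bidirectional LSTM layers so each timestep sees both
    # past and future context, followed by a 10-way softmax (digits 0-9).
    return models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Demo on synthetic data shaped like 4-channel MUSE trials.
rng = np.random.default_rng(0)
raw = rng.standard_normal((32, 256, 4))    # (trials, timesteps, channels)
labels = rng.integers(0, 10, size=32)      # imagined digit labels 0-9
clean = np.stack(
    [np.column_stack([dwt_denoise(trial[:, ch]) for ch in range(raw.shape[2])])
     for trial in raw]
)                                          # same (trials, timesteps, channels)
model = build_model(256, 4)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(clean, labels, epochs=1, batch_size=8, verbose=0)

In practice the denoised trials would come from real MUSE or EPOC recordings rather than random noise, and the network depth and hidden sizes would be tuned per device.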
