Background
An electroencephalogram (EEG)-based brain-computer interface (BCI) driven by imagined speech decodes EEG signals so that users can control external devices or communicate with the outside world at the moment they intend to. To deploy such BCIs effectively, the system must accurately distinguish different brain states from continuous EEG signals as users begin imagining words.

New method
EEG signals were acquired from 15 subjects in four states: resting, listening, imagined speech, and actual speech, each involving a predefined set of 10 words. The signals were preprocessed and segmented, each state was analyzed spatio-temporally and spectrally, and functional connectivity was assessed using the phase locking value (PLV) method. Five features were then extracted from the frequency and time-frequency domains, and classification was performed with four machine learning algorithms in both pairwise and multiclass scenarios, on subject-dependent and subject-independent data.

Results
In the subject-dependent scenario, the random forest (RF) classifier achieved a maximum accuracy of 94.60% for pairwise classification, while the artificial neural network (ANN) classifier achieved a maximum accuracy of 66.92% for multiclass classification. In the subject-independent scenario, the RF classifier achieved maximum accuracies of 81.02% for pairwise classification and 55.58% for multiclass classification. Classifying by frequency band and brain lobe further showed that the theta (θ) and delta (δ) bands, together with the frontal and temporal lobes, are sufficient to distinguish between brain states.

Conclusion
These findings support the development of a system capable of automatically segmenting imagined-speech segments from continuous EEG signals.
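For reference, below is a minimal sketch of the PLV computation used in the connectivity analysis, assuming two equal-length, band-filtered channel signals; the sampling rate and test signals are illustrative and not taken from the study. PLV is the magnitude of the time-averaged complex phase difference between two signals, with instantaneous phase obtained via the Hilbert transform.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equal-length, band-filtered signals."""
    phase_x = np.angle(hilbert(x))  # instantaneous phase of channel x
    phase_y = np.angle(hilbert(y))  # instantaneous phase of channel y
    # Magnitude of the mean complex phase difference:
    # 1 = perfect phase locking, 0 = no consistent phase relation.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Illustrative check: two noisy theta-band (6 Hz) tones with a fixed phase lag.
fs = 250                             # hypothetical sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 6 * t + 0.5) + 0.1 * np.random.randn(t.size)
print(plv(x, y))                     # expected to be close to 1
```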
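Similarly, the pairwise and multiclass evaluations could be set up along these lines with an RF classifier; the feature matrix, labels, and hyperparameters below are synthetic placeholders, not the study's actual five features or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: n_trials x n_features (standing in for the
# five frequency- and time-frequency-domain features described above).
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = rng.integers(0, 4, size=400)  # 0=rest, 1=listen, 2=imagined, 3=actual

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Multiclass: all four states at once.
print(cross_val_score(clf, X, y, cv=5).mean())

# Pairwise: e.g., resting vs. imagined speech.
mask = np.isin(y, [0, 2])
print(cross_val_score(clf, X[mask], y[mask], cv=5).mean())
```

For the subject-independent scenario, the cross-validation folds would be split by subject (e.g., leave-one-subject-out) rather than by trial, so that no subject's data appears in both training and test sets.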