Imagined speech is a process in which a person imagines speaking words without actually uttering them. Electroencephalogram (EEG)-based brain-computer interface (BCI) systems help automatically identify imagined speech to assist persons with severe brain disorders. Extracting meaningful information from raw EEG signals is challenging due to their nonstationary nature. Decomposing a signal into several sub-bands (SBs) using the rational dilation wavelet transform (RADWT) requires selecting predefined tuning parameters, which is an arduous task. The main objective of this study is to propose an adaptive RADWT method that decomposes EEG signals by adaptively selecting these tuning parameters and classifies the signals into distinct categories. The optimal tuning parameters of the RADWT are obtained using particle swarm optimization (PSO) and used to decompose the EEG signals into several SBs. Several statistical features are extracted from each SB and fed as input to six different machine learning algorithms. This work employs a 64-channel EEG dataset recorded from 15 healthy participants for three categories: long words, short words, and vowels. The performance of the proposed AISR system is evaluated using six performance metrics: accuracy, recall, precision, Cohen's kappa, F1-score, and area under the curve. The proposed system achieved average classification accuracies of 87.26±1.12%, 89.23±0.95%, 95.5±0.68%, and 92.16±0.83% for long words, short-long words, short words, and vowels, respectively. Compared with the existing state of the art, the proposed non-parametric decomposition approach combined with the Bagging algorithm achieved a 3%-5% improvement. The performance of the proposed method is further validated on an open-access dataset.
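The abstract describes a three-stage pipeline: wavelet decomposition into sub-bands, statistical feature extraction per sub-band, and ensemble classification. The following is a minimal illustrative sketch of that pipeline in Python, not the authors' implementation: since RADWT with PSO-tuned parameters is not available in standard Python libraries, a plain discrete wavelet transform (`pywt.wavedec` with fixed parameters) stands in for the adaptive decomposition step, the particular statistical features are assumptions, and the data are synthetic stand-ins for real EEG epochs.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

def extract_subband_features(signal, wavelet="db4", level=4):
    """Statistical features from each wavelet sub-band.

    Stand-in for the paper's RADWT sub-bands: the paper tunes the
    RADWT parameters adaptively with PSO, whereas a fixed DWT is
    used here purely for illustration.
    """
    subbands = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for sb in subbands:
        feats.extend([
            np.mean(sb), np.std(sb),        # assumed feature set;
            skew(sb), kurtosis(sb),         # the abstract only says
            np.sqrt(np.mean(sb ** 2)),      # "statistical features"
        ])
    return np.asarray(feats)

# Hypothetical data: 200 single-channel EEG epochs of 512 samples
# with binary labels (e.g., short vs. long words).
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 512))
y = rng.integers(0, 2, size=200)

X = np.vstack([extract_subband_features(ep) for ep in X_raw])

# Bagging ensemble, the classifier the abstract reports as best.
clf = BaggingClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```

On real data, the fixed `wavelet`/`level` choice above would be replaced by the PSO search over RADWT tuning parameters that the study proposes, with each candidate parameter set scored by downstream classification performance.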