Abstract

The electroencephalogram (EEG) of a patient is used to identify motor intention, which a motor-imagery-based brain-computer interface (BCI) then converts into a control signal. Building a BCI is difficult in part because of the high dimensionality of the features extracted from EEG signals. The proposed methodology consists of three stages: pre-processing, feature extraction and selection, and classification. To remove unwanted artifacts, the EEG signals are filtered with a fifth-order Butterworth multichannel band-pass filter; this reduces execution time and memory use, both of which improve system performance. A novel multichannel optimized CSP-ICA feature extraction technique then separates discriminative from non-discriminative information in the EEG channels and discards the latter. Furthermore, the CSP stage employs an Artificial Bee Colony (ABC) algorithm to automatically identify the globally optimal combination of frequency band and time interval for extracting and classifying common spatial pattern features. Finally, a Tunable-Q wavelet transform combined with an optimized feed-forward neural network (FFNN) classifier extracts and classifies the time- and frequency-domain features. The proposed framework therefore optimizes signal processing, enabling enhanced EEG signal classification for BCI applications. The results show that models using the Tunable optimized FFNN achieve classification accuracy more than 20% higher than existing models.
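
To make the pre-processing and feature-extraction stages concrete, the sketch below applies a zero-phase fifth-order Butterworth band-pass filter and a standard CSP transform to multichannel motor-imagery trials. The 8-30 Hz band, 250 Hz sampling rate, synthetic data, and the scipy/mne calls are illustrative assumptions only; the sketch does not reproduce the paper's optimized CSP-ICA, ABC band/time selection, or TQWT-FFNN stages.

```python
# Minimal sketch (not the authors' code): fifth-order Butterworth band-pass
# pre-processing of multichannel EEG followed by CSP feature extraction.
# Band limits, sampling rate, and data shapes are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from mne.decoding import CSP

def bandpass_filter(eeg, fs=250.0, low=8.0, high=30.0, order=5):
    """Zero-phase fifth-order Butterworth band-pass filter.

    eeg : array of shape (n_trials, n_channels, n_samples)
    """
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

# Synthetic data standing in for motor-imagery trials (two classes).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 22, 1000))   # 40 trials, 22 channels, 4 s at 250 Hz
y = rng.integers(0, 2, size=40)

X_filt = bandpass_filter(X)
csp = CSP(n_components=4, log=True)        # log-variance CSP features
features = csp.fit_transform(X_filt, y)    # shape: (40, 4)
print(features.shape)
```

In a full pipeline such as the one the abstract describes, the fixed 8-30 Hz band and trial window above would instead be searched by the ABC optimizer, and the resulting CSP features would feed the TQWT-based FFNN classifier.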
