Improving the assessment of motor disabilities and the classification of motor imagery is a significant concern in contemporary medical practice, demanding reliable solutions that improve patient outcomes. One promising avenue is the brain–computer interface (BCI), which establishes a direct communication pathway between the user's brain and a machine. This technology has the potential to transform human–machine interaction, especially for individuals diagnosed with motor disabilities. Despite this promise, extracting reliable control signals from noisy brain data remains a critical challenge. In this paper, we introduce a novel approach that combines five convolutional neural network (CNN) models into an ensemble to improve the classification accuracy of motor imagery tasks, an essential component of BCI systems. Our method achieves an accuracy of 79.44% on the BCI Competition IV 2a dataset, surpassing existing state-of-the-art approaches that combine multiple CNN models. This advancement offers significant promise for enhancing the efficacy and versatility of BCIs in a wide range of real-world applications, from assistive technologies to neurorehabilitation, thereby providing robust solutions for individuals with motor disabilities.
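The abstract does not specify the individual CNN architectures or how their outputs are combined; the sketch below is a minimal illustration only, assuming a simple soft-voting ensemble in PyTorch with input dimensions matching BCI Competition IV 2a (22 EEG channels, 4 motor imagery classes). The names `SimpleEEGCNN` and `CNNEnsemble` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class SimpleEEGCNN(nn.Module):
    """Small CNN over EEG trials shaped (batch, 1, channels, samples). Hypothetical architecture."""
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),   # temporal filtering
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),           # spatial filtering across channels
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class CNNEnsemble(nn.Module):
    """Soft-voting ensemble: average the class probabilities of several CNNs."""
    def __init__(self, n_models=5, **kwargs):
        super().__init__()
        self.models = nn.ModuleList(SimpleEEGCNN(**kwargs) for _ in range(n_models))

    def forward(self, x):
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.models])
        return probs.mean(dim=0)  # (batch, n_classes) averaged probabilities

# Example: a batch of 8 trials, 22 EEG channels, 1000 time samples (~4 s at 250 Hz)
ensemble = CNNEnsemble(n_models=5, n_channels=22, n_samples=1000, n_classes=4)
pred = ensemble(torch.randn(8, 1, 22, 1000)).argmax(dim=-1)
print(pred)  # predicted motor imagery class per trial
```

In practice each member network would be trained (jointly or independently) on the motor imagery trials before its predictions are averaged; the fusion rule used in the paper may differ from the simple probability averaging shown here.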