Hybrid BCI-based instruction set for dual robotic arm control using EEG and eye movement signals
A brain-computer interface (BCI) establishes a pathway for information transmission between a human (or animal) and an external device. It can be used to control devices such as prosthetic limbs and robotic arms, which in turn assist, rehabilitate, and enhance human limb function. At present, although most studies focus on brain signal acquisition, feature extraction and recognition, and further explore the use of brain signals to control external devices, the features obtained via noninvasive approaches are fewer and less robust, which makes it difficult to directly control devices with more degrees of freedom such as robotic arms. To address these issues, we propose an extended instruction set based on motor imagery that fuses eye-movement signals and electroencephalogram (EEG) signals for motion control of a dual collaborative robotic arm. The method incorporates spatio-temporal convolution and attention mechanisms for brain-signal classification. Starting from a small base of control commands, the hybrid BCI combining eye-movement signals and EEG expands the command set, enabling motion control of the dual cooperative manipulator. On the Webots simulation platform, we carried out kinematic control and three-dimensional motion simulation of a dual 6-degree-of-freedom collaborative robotic arm (UR3e). The experimental results demonstrate the feasibility of the proposed method. Our algorithm achieves an average accuracy of 83.8% with only 8.8k parameters, and the simulation results are within the expected range. The results demonstrate that the proposed extended instruction set based on motor imagery is effective not only for controlling dual collaborative robotic arms to perform grasping tasks in complex scenarios, but also for operating other multi-degree-of-freedom peripheral devices.
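The instruction-set expansion described above can be illustrated with a small sketch: fusing a 4-class motor-imagery decoder with a 4-region gaze detector turns 4 base commands into 16. The class names, region names, and the concrete mapping below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (assumed class/region names): pairing each MI class
# with each gaze region expands 4 base commands into a 16-command set.

MI_CLASSES = ["left_hand", "right_hand", "feet", "tongue"]
GAZE_REGIONS = ["up", "down", "left", "right"]

def build_command_table():
    """Enumerate every (MI class, gaze region) pair as a distinct command id."""
    return {(mi, gaze): i * len(GAZE_REGIONS) + j
            for i, mi in enumerate(MI_CLASSES)
            for j, gaze in enumerate(GAZE_REGIONS)}

def decode_command(mi_label, gaze_label, table):
    """Fuse one classified MI label with one detected gaze region."""
    return table[(mi_label, gaze_label)]

table = build_command_table()
```

The same lookup idea scales to any pair of modalities whose outputs are discrete, which is what lets a small base of EEG commands drive a higher-DOF device.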
- Research Article
- 10.1016/s1474-4422(08)70223-0
- Oct 2, 2008
- The Lancet Neurology
Brain–computer interfaces in neurological rehabilitation
- Research Article
- 10.5075/epfl-thesis-6870
- Jan 1, 2016
Brain-machine interfaces (BMIs) allow the user to control an external device such as a robotic arm, a cursor, or an avatar in a virtual world through the real-time decoding of brain signals and without the involvement of the musculoskeletal system. Although BMIs hold great promise for providing motor-impaired patients with means of control and interaction with the external world, the neurocognitive mechanisms that are involved in the use of a BMI remain poorly understood. In addition, BMIs allow researchers to investigate and separate the brain processes underlying cognitive functions from those related to motor control and afferent sensory signals that reliably accompany movements. Thus, BMIs are powerful tools for basic and applied neuroscience research. The present work investigates different stages of a BMI-mediated action and targets the subjective, behavioral, and neural signals of BMI control based on electroencephalography (EEG) signals. More specifically, the main study of this thesis has focused on the development of a multimodal imaging platform that allows EEG-based BMI control inside the magnetic resonance imaging (MRI) scanner and during the acquisition of functional MRI (fMRI). This platform has enabled us to exploit both the high temporal resolution of EEG and the high spatial resolution of fMRI to precisely identify the brain mechanisms underlying BMI control and those reflecting the subjective feeling of being in control over a BMI-action (i.e., the sense of agency, SoA). In this study we found an extended cortico-subcortical network involved in operating a motor-imagery BMI. Overall BMI performance was associated with activity in a set of regions including contralateral premotor cortex and the posterior cingulate cortex. Finally, cortical midline regions and the basal ganglia were involved in the subjective sense of controlling the BMI. 
In a second study, we further investigated whether the ability to control a BMI relates to the ability to perform motor imagery. Despite decades of technical advances, effective BMI control remains limited to a subset of users. We show that inter-subject variability in BMI proficiency is associated with differences in motor imagery accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. We also identified enhanced lateralized α-band oscillations over sensorimotor cortices during motor imagery in high- versus low-aptitude BMI users. Finally, we developed a novel paradigm for joint BMI actions, allowing two users to be jointly engaged in BMI control; our preliminary data support the hypothesis that joint actions improve BMI performance even in the absence of a physical connection. Our findings also show that during joint BMI control the SoA over the imagery-mediated actions is significantly enhanced and is affected by BMI abilities. This work shows the potential of applying a multimodal imaging approach to the field of BMI: exploiting our EEG-BMI-fMRI platform, we were able to identify the neural mechanisms involved in motor and cognitive aspects of imagery-mediated BMI. Furthermore, our results enable us to better understand the subjective and behavioral aspects of BMI actions and reveal strategies that could potentially reinforce BMI control. Our findings can ultimately be of relevance to the field of neuroprosthetics and make BMIs more accessible to a broader range of users.
- Conference Article
- 10.1109/m2vip49856.2021.9665060
- Nov 26, 2021
A brain-computer interface (BCI) can help establish a bridge between humans and external devices, for example assisting stroke patients with daily tasks. Motor imagery (MI) electroencephalography plays a critical role in BCI applications. To address the lack of control instruction sets for motor imagery-based brain-computer interfaces, this paper proposes a variable-length coding strategy to control the movement of a KUKA robotic arm with multiple degrees of freedom (DOF). The motor imagery EEG signals are classified into four categories using common spatial patterns (CSP) for feature extraction and a support vector machine (SVM) for classification. The classification results for left-hand and right-hand motor imagery are encoded with binary Huffman coding, and the codewords are mapped to the 7 degrees of freedom of the robotic arm; the number of occurrences of each Huffman codeword controls the rotation angle of the corresponding degree of freedom. The classification results for tongue and feet motor imagery are used to enter and exit the selection of a degree of freedom, respectively. In the designed object grasping-and-placing scene, instruction control using the Huffman variable-length coding strategy reduces the time by 19 and 18 times and increases efficiency by 23.4% and 20.2%, respectively, compared with binary fixed-length coding. The experimental results demonstrate that Huffman coding enables control of a robotic arm's multiple degrees of freedom using a finite set of motor imagery classification categories. The method improves information transfer efficiency and expands the control instruction set of brain-computer interfaces, offering new ideas for application scenarios of motor imagery-based BCIs.
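The variable-length coding idea in this abstract lends itself to a short sketch: binary Huffman codewords, built from per-DOF usage frequencies, are decoded from a stream of left(0)/right(1) MI classifications. The frequencies and DOF names below are illustrative assumptions, not the paper's values.

```python
import heapq

def huffman_codes(freqs):
    """Build a binary prefix code {symbol: bitstring} from a frequency table."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    codes = {s: "" for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, g1 = heapq.heappop(heap)  # merge the two least-frequent groups
        f2, _, g2 = heapq.heappop(heap)
        for s in g1:
            codes[s] = "0" + codes[s]    # prepend the branch bit
        for s in g2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, tiebreak, g1 + g2))
        tiebreak += 1
    return codes

def decode_dof(bits, codes):
    """Read left(0)/right(1) MI outputs as bits and emit DOF selections."""
    rev = {v: k for k, v in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in rev:                   # prefix-free, so first match is correct
            out.append(rev[buf])
            buf = ""
    return out

# Assumed per-DOF usage frequencies; frequent DOFs get shorter codewords.
freqs = {"dof%d" % i: f for i, f in enumerate([30, 20, 15, 12, 10, 8, 5])}
codes = huffman_codes(freqs)
```

Because frequently used degrees of freedom receive shorter codewords, the expected number of MI classifications per command drops relative to fixed-length coding, which is the efficiency gain the abstract reports.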
- Research Article
- 10.1504/ijscc.2018.10010439
- Jan 1, 2018
- International Journal of Systems, Control and Communications
The purpose of a brain-machine interface (BMI) is to provide a communication path between brain signals and external devices. Using brain signals to restore the function of impaired body parts requires such an interface. This article presents a BMI for controlling a five-fingered prosthetic hand using electroencephalography (EEG) signals. The core objective is to develop a control system able to act on the user's thoughts. EEG signals captured from the human scalp serve as the information carrier for the brain-computer interface (BCI) system. The article discusses the development of a system to assist disabled persons using EEG-based signals.
- Research Article
- 10.11834/jig.230031
- Jan 1, 2023
- Journal of Image and Graphics
A survey on encoding and decoding technology of non-invasive brain-computer interface
- Conference Article
- 10.1109/hora55278.2022.9800002
- Jun 9, 2022
- HORA 2022 - 4th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, Proceedings
A brain-computer interface (BCI) is a connection path between the brain and an external device. Motor imagery (MI) is a proven cognitive technique for enhancing motor skills as well as for movement-disorder rehabilitation therapy. The efficiency of MI training can be enhanced by the BCI approach, which provides real-time feedback on the mental attempts of the subject. Artificial intelligence (AI) methods play a key role in detecting changes in brain signals and converting them into appropriate control signals. In this paper, we focus on brain signals obtained from the scalp to control assistive devices. In addition, the signal denoising, feature extraction, dimension reduction, and AI techniques used for EEG-based BCI are evaluated. Moreover, Bagging and AdaBoost are used to classify MI tasks from EEG signals. Different classifiers are compared to improve detection performance while keeping the system real-time and controlling latency. MI-related brain activities can be categorized efficiently via AI techniques. This paper uses a wavelet packet decomposition feature-extraction approach to improve MI recognition accuracy and classifies MI-related brain signals using ensemble techniques. The results show that the proposed framework surpasses traditional machine learning approaches. Furthermore, the proposed AdaBoost with k-NN ensemble approach yields a classification accuracy of 94.57% for MI classification in the subject-independent case.
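As a minimal sketch of the wavelet packet decomposition stage named above (using a Haar filter for simplicity; the paper's wavelet family and depth are not specified here), each EEG segment is recursively split into subbands whose energies serve as classifier features:

```python
import numpy as np

def haar_split(x):
    """One analysis step: orthonormal average (approximation) and difference (detail)."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_packet_energies(x, depth=3):
    """Full wavelet-packet tree to `depth`; return the 2**depth subband energies."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_split(node)   # split BOTH branches (packet, not plain DWT)
            nxt.extend([a, d])
        nodes = nxt
    return np.array([np.sum(n ** 2) for n in nodes])

rng = np.random.default_rng(0)
sig = rng.standard_normal(256)        # stand-in for one EEG channel segment
feats = wavelet_packet_energies(sig, depth=3)
```

Because the Haar pair is orthonormal, the subband energies sum to the signal energy, so the feature vector is a lossless energy partition across frequency bands.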
- Research Article
- 10.1007/s12193-020-00358-4
- Jan 25, 2021
- Journal on Multimodal User Interfaces
Brain-computer interface is an interesting and important research field that has contributed to widespread application systems. In the medical field, it aids physically challenged persons in rehabilitation and restoration. In a brain-computer interface, the computer acts as an interface between brain signals and an external device: it processes the brain signals and sends the necessary instructions to the external device, which helps restore the patient's movement ability. Motor imagery is the imagination of motor movements of, for example, the hand, foot, or tongue. There is an associated brain signal when a person moves their hand, foot, or tongue, and similarly when a physically challenged person imagines such a movement. When this brain signal is analyzed by a brain-computer interface, it can facilitate motor movements through the external device. The aim of this work is to analyze and classify brain signals for motor movements to aid rehabilitation and restoration. In this paper, BCI Competition IV Dataset I and Dataset IIa, BCI Competition III Dataset IIIa, and a neuroprosthetic EEG dataset are analyzed. A novel optimization technique, Neighborhood-based Decision Theoretic Rough Set under Dynamic Granulation (NDTRS under DG), is proposed for motor imagery classification. It is a hybrid approach combining two algorithms: Neighborhood Rough Set and Decision Theoretic Rough Set under Dynamic Granulation (DTRS under DG). Neighborhood Rough Set overcomes the drawback of the discretization step in Rough Set, while DTRS under DG incorporates a loss function into the classification; involving the loss function in the construction of the algorithm improves classification effectiveness.
The proposed method gives higher classification accuracy than existing approaches.
- Book Chapter
- 10.1007/978-3-642-24091-1_60
- Jan 1, 2011
To enhance human interaction with machines, research interest is growing in developing a 'Brain-Computer Interface' (BCI), which allows a human to communicate with a machine using brain signals alone. In this paper, a pocket PC game was designed as a brain-computer interface application. In this system, cerebral-cortex EEG based on motor imagery is fed into a signal-processing module, and a classification module then decodes the motor imagery. The classification outputs are converted into controls for the character in the game. The experimental results show that BCI technology can be used not only for rehabilitation but also for general entertainment.
Keywords: Brain-Computer Interface, Pocket PC Game, EEG
- Conference Article
- 10.1109/smc.2019.8914058
- Oct 1, 2019
Brain-machine interfaces (BMIs) provide a new control strategy for both patients and healthy people. An endogenous paradigm such as motor imagery (MI) is commonly used for detecting user intention without external stimuli. However, manipulating a dexterous robotic arm with a limited set of MI commands is challenging. In this paper, we designed a shared robotic arm control system using intuitive MI and vision guidance. To accomplish the user's intention with the robotic arm, we used arm-reach MI (left, right, and forward), hand-grasp MI, and wrist-twist MI decoded from electroencephalogram (EEG) signals. A Kinect sensor matches the decoded user intention with the detected object based on its location in the workspace. In addition, to decode intuitive MI successfully, we propose a novel convolutional neural network (CNN)-based user intention decoding model. Ten subjects participated in our experiments, and five of them were selected to perform online tasks. The proposed method could decode various user intentions (five intuitive MI classes and the resting state) with a grand-averaged classification accuracy of 55.91% in offline analysis. For sufficient control of the shared robotic arm, the online system was started only once a subject exceeded 60% accuracy in the offline analysis. For the online drinking tasks, we confirmed an average success rate of 78%. Hence, we confirmed the possibility of shared robotic arm control based on intuitive BMI and vision guidance with high performance.
- Book Chapter
- 10.1201/9781003146810-11
- Jun 21, 2021
Brain Computer Interface (BCI) is a system of software and hardware that establishes a direct connection between brain and external devices by making use of brain signals. Various types of imaging techniques such as Electroencephalography (EEG), Magnetoencephalography (MEG), Functional magnetic resonance imaging (fMRI), Functional near-infrared spectroscopy (fNIRS), etc. can be used to devise BCI systems. This chapter gives an overview of basic components and working of BCI systems, emphasizing signal acquisition, brain signal patterns and signal processing methods. The chapter begins with an introduction to various neuroimaging techniques relevant to BCI applications. The main focus will be on EEG based BCIs which are more popular among researchers due to the simplicity of application. The chapter further explores the BCI systems based on different brain signal patterns such as Slow Cortical Potential (SCP), Sensorimotor Rhythms (SMR), P300 Event-Related Potentials (ERP) and Steady-State Evoked Potentials (SSEPs) with the help of existing literature. Techniques to improve signal quality and feature extraction are also reviewed. Further, the commonly used data classifiers or algorithms are discussed. These algorithms determine the user’s intention by classifying the features extracted from acquired signals. The chapter concludes with a summary of the software tools available for BCI research.
- Book Chapter
- 10.5772/55166
- Jun 5, 2013
A brain-computer interface (BCI) provides a direct functional interaction between the human brain and an external device. Many kinds of signals (from electromagnetic to metabolic [23, 38, 42]) can be used in a BCI; however, the most widespread BCI systems are based on EEG recordings. A BCI consists of a brain-signal acquisition system, data-processing software for feature extraction and pattern classification, and a system to transfer commands to an external device and thus provide feedback to the operator. The most prevalent BCI systems are based on discriminating EEG patterns related to the execution of different mental tasks [14, 21, 24]. This approach is justified by the correlation, revealed by basic research, between brain-signal features and the tasks performed [24, 28, 30, 45]. By agreement with the BCI operator, each mental task is associated with one of the commands to the external device; to produce commands, the operator switches voluntarily between the corresponding mental tasks. If the BCI is dedicated to controlling device movements, then the psychologically convenient mental tasks are motor imaginations. For example, when a patient uses a BCI to control a wheelchair, movement to the left can be associated with imagining left-arm movement and movement to the right with imagining right-arm movement. Another advantage of these mental tasks is that their performance is accompanied by easily recognizable EEG patterns. Moreover, motor imagination is now considered an efficient rehabilitation procedure to restore movement after paralysis [4]. The analysis of BCI performance based on motor imagination is therefore the subject of the present chapter.
- Research Article
- 10.1038/s41598-022-15813-3
- Jul 11, 2022
- Scientific reports
Over the past few years, the processing of motor imagery (MI) electroencephalography (EEG) signals has attracted attention for developing brain-computer interface (BCI) applications, since feature extraction and classification of these signals are extremely difficult due to their inherent complexity and susceptibility to artifacts. BCI systems provide a direct interaction pathway between the brain and a peripheral device, so MI EEG-based BCI systems are crucial for letting patients with motor disabilities control external devices. The current study presents a semi-supervised model based on three-stage feature extraction and machine learning algorithms for MI EEG signal classification, improving classification accuracy with a smaller number of deep features when distinguishing right- and left-hand MI tasks. The Stockwell transform is employed in the first phase of the proposed feature extraction to generate two-dimensional time-frequency maps (TFMs) from one-dimensional EEG signals. Next, a convolutional neural network (CNN) extracts deep feature sets from the TFMs. Then, semi-supervised discriminant analysis (SDA) is used to reduce the number of descriptors. Finally, the performance of five classifiers (support vector machine, discriminant analysis, k-nearest neighbor, decision tree, and random forest) and their fusion is compared. The hyperparameters of SDA and the mentioned classifiers are tuned by Bayesian optimization to maximize accuracy. The presented model is validated on BCI Competition II dataset III and BCI Competition IV dataset 2b. The performance metrics of the proposed method indicate its efficiency for classifying MI EEG signals.
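The Stockwell-transform stage of this pipeline can be sketched in a few lines. This is a textbook frequency-domain implementation of the discrete S-transform, not the authors' code; the normalization and the choice of positive-frequency voices are assumptions.

```python
import numpy as np

def stockwell_tfm(x):
    """Discrete S-transform magnitude for positive-frequency voices 1..N//2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(n) * n              # signed FFT bin offsets
    rows = []
    for k in range(1, n // 2 + 1):
        # Frequency-domain Gaussian window, wider (in time) at low frequencies.
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / k ** 2)
        rows.append(np.fft.ifft(np.roll(X, -k) * gauss))
    return np.abs(np.array(rows))          # TFM, shape (n // 2, n)

t = np.arange(256) / 256.0
tfm = stockwell_tfm(np.sin(2.0 * np.pi * 32.0 * t))   # pure 32-cycle tone
peak_row = int(np.argmax(tfm.mean(axis=1)))           # strongest voice
```

For a pure tone the energy concentrates on the voice matching the tone's frequency, which is exactly the 2-D structure the CNN stage then consumes.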
- Research Article
- 10.22146/ijitee.48110
- Dec 11, 2019
- IJITEE (International Journal of Information Technology and Electrical Engineering)
EEG signals are obtained from an EEG device that records the user's brain activity; they can be generated by the user performing motor movements or imagery tasks. Motor imagery (MI) is the task of imagining motor movements that resemble the original movements. A brain-computer interface (BCI) bridges the interaction between users and applications. The BCI Competition IV 2a dataset was used in this study. A fully automated correction method for EOG artifacts in EEG recordings was applied to remove artifacts, and Common Spatial Patterns (CSP) were used to extract features that distinguish motor imagery tasks. This paper presents a comparative study of two deep learning methods, Deep Belief Networks (DBN) and Long Short-Term Memory (LSTM), evaluated on the BCI Competition IV-2a dataset. The experiments show average accuracies of 50.35% for DBN and 49.65% for LSTM.
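A minimal numpy sketch of the CSP feature-extraction step mentioned above (the synthetic trial shapes and the choice of two filters per side are illustrative assumptions, not the study's configuration):

```python
import numpy as np

def class_covariance(trials):
    """Average trace-normalized spatial covariance over trials of shape (n, ch, t)."""
    covs = []
    for x in trials:
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Return 2*n_pairs spatial filters maximizing variance contrast between classes."""
    ca, cb = class_covariance(trials_a), class_covariance(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(white @ ca @ white.T)
    order = np.argsort(d)                     # ascending eigenvalues
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]
    return v[:, keep].T @ white               # shape: (2 * n_pairs, ch)

# Synthetic two-class data: class A is strong on channel 0, class B on channel 7.
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 8, 128)) * np.r_[3.0, np.ones(7)][:, None]
b = rng.standard_normal((20, 8, 128)) * np.r_[np.ones(7), 3.0][:, None]
W = csp_filters(a, b)
```

The log-variances of the filtered trials (`np.log(np.var(W @ x, axis=1))`) would then be the feature vectors fed to the downstream classifier.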
- Book Chapter
- 10.1007/978-981-19-7943-9_7
- Jan 1, 2022
Brain-computer interface (BCI) is a new interaction model that directly connects the human or animal brain with external devices and has a wide range of application scenarios. Through BCI technology based on electroencephalography (EEG) signals, communication with and control of external devices can be realized independently of the peripheral nervous system and muscle tissue. Motor imagery (MI) is a process in which people imagine their limbs or muscles moving in order to control external assistive devices (wheelchairs, robotic arms, robots, etc.), so that people without motor ability can regain communication and mobility to a certain extent. In this paper, the basics of EEG and EEG signal acquisition are introduced first. Then, the analysis methods and research content of EEG signal preprocessing, feature extraction, and feature classification based on motor imagery are introduced in detail. Finally, brain-computer interface technology based on motor imagery is summarized, and future prospects are discussed.
Keywords: Brain-computer Interface (BCI), Motor Imagery (MI), EEG signals, Feature extraction
- Research Article
- 10.3389/fnbot.2024.1343249
- Jan 30, 2024
- Frontiers in Neurorobotics
As an interactive method gaining popularity, brain-computer interfaces (BCIs) aim to facilitate communication between the brain and external devices. Among the various research topics in BCIs, the classification of motor imagery using electroencephalography (EEG) signals has the potential to greatly improve the quality of life for people with disabilities, assisting them in controlling computers or other devices like prosthetic limbs, wheelchairs, and drones. However, the current performance of EEG signal decoding is not sufficient for real-world applications based on motor imagery EEG (MI-EEG). To address this issue, this study proposes an attention-based bidirectional feature pyramid temporal convolutional network model for the MI-EEG classification task. The model incorporates a multi-head self-attention mechanism to weight significant features in the MI-EEG signals and utilizes a temporal convolutional network (TCN) to separate high-level temporal features. The signals are enhanced using the sliding-window technique, and channel and time-domain information of the MI-EEG signals is extracted through convolution. Additionally, a bidirectional feature pyramid structure implements attention mechanisms across different scales and multiple frequency bands of the MI-EEG signals. The performance of the model is evaluated on the BCI Competition IV-2a and IV-2b datasets, where it outperformed the state-of-the-art baseline model with subject-dependent accuracies of 87.5% and 86.3%, respectively. In conclusion, the BFATCNet model offers a novel approach for EEG-based motor imagery classification in BCIs, effectively capturing relevant features through attention mechanisms and temporal convolutional networks. Its superior performance on the BCI Competition IV-2a and IV-2b datasets highlights its potential for real-world applications.
However, its performance on other datasets may vary, necessitating further research on data augmentation techniques and integration with multiple modalities to enhance interpretability and generalization. Additionally, reducing computational complexity for real-time applications is an important area for future work.
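The sliding-window enhancement mentioned in this abstract can be sketched as overlapping crops of a trial; the window and stride values below are assumptions, not the paper's settings.

```python
import numpy as np

def sliding_windows(trial, win=256, stride=64):
    """Cut a (channels, samples) trial into overlapping (channels, win) crops."""
    _, t = trial.shape
    starts = range(0, t - win + 1, stride)
    return np.stack([trial[:, s:s + win] for s in starts])

trial = np.zeros((22, 1000))       # e.g. 22 EEG channels, 1000 samples
crops = sliding_windows(trial)     # 12 overlapping training crops
```

Each crop inherits the trial's label, multiplying the training examples available to the network, which is a common remedy for the small trial counts of MI-EEG datasets.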