Developing a brain-controlled robotic arm that enables patients with motor impairments to perform activities of daily living through brain–computer interfaces (BCIs) is an ambitious goal. Despite considerable progress, this mission remains challenging, mainly because of the poor decoding performance of BCIs; the problem is further exacerbated in the case of noninvasive BCIs. In this work, a shared control strategy is developed to realize flexible robotic arm control for reaching and grasping multiple objects. With intelligent assistance provided by robot vision, the subject was only required to perform gross reaching movements and target selection using a simple motor imagery-based BCI with binary output. Alongside the user's control, the robotic arm, which identified and localized potential targets within the workspace in the background, provided both trajectory correction in the reaching phase to reduce trajectory redundancy and autonomous grasping assistance in the grasping phase. Ten subjects participated in experiments comprising one session of two-block grasping tasks with fixed locations and another of randomly placed three-block grasping tasks. The results demonstrated substantial improvement with the shared control system: compared with BCI control alone, the success rate of shared control was significantly higher (<inline-formula> <tex-math notation="LaTeX">$p < 0.001$ </tex-math></inline-formula> for group performance), while the task completion time and perceived difficulty were both significantly lower (<inline-formula> <tex-math notation="LaTeX">$p < 0.001$ </tex-math></inline-formula> for group performance), indicating the potential of the proposed shared control system in real applications. <i>Note to Practitioners</i>—This article is motivated by the problem of dexterous robotic arm control based on a brain–computer interface (BCI). 
For people with severe neuromuscular disorders or injuries from accidents, a brain-controlled robotic arm is expected to provide assistance in daily life. A primary bottleneck to achieving this objective is that the information transfer rate of current BCIs is not high enough to produce multiple, reliable commands during online robotic control. In this work, machine autonomy is incorporated into a BCI-controlled robotic arm system, where the user and machine work together to reach and grasp multiple objects in a given task. The intelligent robot system autonomously localized the potential targets and provided trajectory correction and grasping assistance accordingly. Meanwhile, the user only needed to complete gross reaching movements and target selection with a basic binary motor imagery-based BCI, which reduced the task difficulty while retaining the volitional involvement of the user. The experimental results showed that the accuracy and efficiency of the grasping tasks increased significantly in the shared control mode, together with a significant decrease in perceived mental workload, indicating that the proposed shared control system is effective and user-friendly in practice. In the future, more feedback information will be introduced to further enhance task performance, and a wheelchair-mounted robotic arm system will be developed for greater flexibility. In addition, more functional task modules (e.g., self-feeding and opening doors) should be integrated for more practical utility.
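To make the shared-control idea concrete, the sketch below illustrates one plausible per-tick arbitration between a binary BCI direction command and a vision-derived target vector. This is a minimal illustration, not the paper's algorithm: the blending weight `alpha`, the `grasp_radius` threshold, and the 2-D simplification are all assumptions introduced here.

```python
import math

def unit(v):
    """Normalize a 2-D vector; return the zero vector if it is near zero."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 1e-9 else (0.0, 0.0)

def shared_control_step(end_effector, target, bci_cmd, alpha=0.6, grasp_radius=0.05):
    """One hypothetical shared-control tick (illustrative, not the paper's method).

    end_effector, target: 2-D positions in meters.
    bci_cmd: coarse direction decoded from the binary MI-BCI, e.g. (1, 0) or (-1, 0).
    alpha: weight given to the machine's vision-based correction (assumed value).
    Returns (velocity_direction, grasp_flag).
    """
    to_target = (target[0] - end_effector[0], target[1] - end_effector[1])
    dist = math.hypot(to_target[0], to_target[1])
    if dist < grasp_radius:
        # Close enough: hand control over to autonomous grasping.
        return (0.0, 0.0), True
    # Blend the user's coarse direction with the vector toward the detected target,
    # so the user keeps volitional control while the robot trims trajectory redundancy.
    u, t = unit(bci_cmd), unit(to_target)
    blended = unit(((1 - alpha) * u[0] + alpha * t[0],
                    (1 - alpha) * u[1] + alpha * t[1]))
    return blended, False
```

In this sketch, far from the target the commanded direction is a weighted mix of the user's intent and the vision correction (trajectory correction), and inside `grasp_radius` the robot takes over entirely (autonomous grasping assistance), mirroring the two assistance phases described above.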