Abstract

Objective. Recent attempts at developing brain–computer interface (BCI)-controlled robots have shown the potential of this technology in the field of assistive robotics. However, picking and placing objects with a BCI-controlled robotic arm remains challenging: BCI performance, system portability, and user comfort all need to be further improved. Approach. In this study, a novel control approach, which combines a high-frequency steady-state visual evoked potential (SSVEP)-based BCI with computer vision-based object recognition, is proposed to control a robotic arm in pick-and-place tasks that require control with multiple degrees of freedom. The computer vision module identifies objects in the workspace and locates their positions, while the BCI allows the user to select one of these objects to be acted upon by the robotic arm. The robotic arm was programmed to autonomously pick up and place the selected target object without moment-by-moment supervision by the user. Main results. Online results obtained from ten healthy subjects indicated that a BCI command for the proposed system could be selected from four possible choices in 6.5 s (i.e. 2.25 s for visual stimulation and 4.25 s for gaze shifting) with 97.75% accuracy. All subjects successfully completed the pick-and-place tasks using the proposed system. Significance. These results demonstrate the feasibility and efficiency of combining a high-frequency SSVEP-based BCI with computer vision-based object recognition to control robotic arms. The control strategy presented here could be extended to control robotic arms in other complex tasks.
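The following is a minimal sketch of the control pipeline described above: computer vision locates candidate objects, an SSVEP decoder maps the user's EEG during the stimulation window to one of up to four flickering targets, and the arm then picks and places the chosen object autonomously. The function names, sampling rate, flicker frequencies, and the canonical correlation analysis (CCA) decoder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

FS = 250                                 # assumed EEG sampling rate (Hz)
STIM_SECONDS = 2.25                      # visual stimulation window from the abstract
STIM_FREQS = [30.0, 31.0, 32.0, 33.0]    # assumed high-frequency flicker set (Hz)


def reference_signals(freq, n_samples, fs=FS, n_harmonics=2):
    """Sine/cosine references at the flicker frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs)                # shape: (2 * n_harmonics, n_samples)


def cca_score(eeg, refs):
    """Largest canonical correlation between EEG (channels x samples) and the
    reference set -- a standard SSVEP decoding criterion."""
    X = eeg - eeg.mean(axis=1, keepdims=True)
    Y = refs - refs.mean(axis=1, keepdims=True)
    # Orthonormal bases of the column spaces of the (samples x features) matrices;
    # the top singular value of Qx^T Qy is the maximum canonical correlation.
    Qx, _ = np.linalg.qr(X.T)
    Qy, _ = np.linalg.qr(Y.T)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]


def decode_selection(eeg_window):
    """Pick the stimulation frequency whose references best match the EEG."""
    n = eeg_window.shape[1]
    scores = [cca_score(eeg_window, reference_signals(f, n)) for f in STIM_FREQS]
    return int(np.argmax(scores))


def run_trial(detect_objects, acquire_eeg, pick_and_place):
    """One BCI command cycle: detect objects, flicker, decode, act.
    The three callables are hypothetical hooks into the vision system,
    the EEG amplifier, and the robotic arm controller."""
    objects = detect_objects()                   # [(label, (x, y, z)), ...]
    targets = objects[:len(STIM_FREQS)]          # at most four selectable items
    eeg = acquire_eeg(int(STIM_SECONDS * FS))    # (channels, samples) window
    choice = decode_selection(eeg)
    label, position = targets[choice]
    pick_and_place(position)                     # autonomous grasp-and-place
    return label
```

Because the arm executes the grasp autonomously once a target is chosen, the user only issues one low-bandwidth selection per object rather than steering every degree of freedom, which is the key efficiency argument in the abstract.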
