Abstract

Brain-computer interfaces (BCIs) are gradually being adopted in human-robot interaction systems. Steady-state visual evoked potential (SSVEP), an electroencephalography (EEG) paradigm, has attracted growing attention in BCI research due to its stability and efficiency. However, a traditional SSVEP-BCI system requires a dedicated monitor to display the stimulus targets, and those targets are mapped rigidly to a set of pre-defined commands. These constraints limit the deployment of SSVEP-BCI application systems in complex and changeable scenarios. In this study, an SSVEP-BCI system integrated with augmented reality (AR) is proposed. A stimulation interface is generated by merging the visual information of the objects with the stimulus targets, so that the mapping between stimulus targets and objects is updated automatically as the objects in the workspace change. In an online AR-based SSVEP-BCI cue-guided grasping experiment with a robotic arm, the grasping success rate was 87.50 ± 3.10% with an SSVEP-EEG recognition time of 0.5 s based on FB-tCNN. The proposed AR-based SSVEP-BCI system enables users to select intended targets more ecologically and to grasp a wider variety of objects with a limited number of stimulus targets, giving it the potential to be used in complex and changeable scenarios.
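The paper decodes 0.5 s SSVEP-EEG windows with FB-tCNN, a filter-bank convolutional network. As a much simpler illustration of the underlying idea of frequency-coded SSVEP decoding, the sketch below scores candidate stimulus frequencies by regressing a single-channel window onto sine/cosine reference signals and picking the best fit. The function names, parameters, and synthetic data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ssvep_frequency_score(signal, freq, fs, n_harmonics=2):
    """Fraction of signal variance explained by sin/cos references at freq
    and its harmonics (a crude stand-in for canonical correlation)."""
    t = np.arange(len(signal)) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    # Least-squares projection of the window onto the reference subspace.
    coef, *_ = np.linalg.lstsq(Y, signal, rcond=None)
    resid = signal - Y @ coef
    return 1.0 - resid.var() / signal.var()

def classify_ssvep(signal, candidate_freqs, fs):
    """Return the candidate stimulus frequency with the highest score."""
    scores = [ssvep_frequency_score(signal, f, fs) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Demo on synthetic data: a 0.5 s window at 250 Hz containing a 10 Hz
# response plus noise (hypothetical values, chosen only for illustration).
fs, duration = 250, 0.5
t = np.arange(int(fs * duration)) / fs
rng = np.random.default_rng(0)
window = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(window, [8.0, 10.0, 12.0, 15.0], fs))
```

In a real system each candidate frequency corresponds to one flickering AR stimulus target, so identifying the dominant frequency in the EEG window identifies the object the user is attending to; FB-tCNN replaces this hand-crafted projection with learned filter-bank features.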
