Abstract

Current EEG-based brain-computer interface (BCI) technologies mainly focus on independently using SSVEP, motor imagery, P300, or other signals to recognize human intention and generate a few control commands. SSVEP and P300 require external stimuli, while motor imagery does not. However, each of these methods on its own yields only a limited set of control commands, which is not enough for a robot to provide satisfactory service to the user. Taking advantage of both SSVEP and motor imagery, this paper aims to design a hybrid BCI system that provides multimodal control commands to the robot. In this hybrid BCI system, three SSVEP signals are used to command the robot to move forward, turn left, and turn right, while one motor imagery signal is used to trigger the robot's grasp motion. To enhance the performance of the hybrid BCI system, a visual servo module is also developed to assist the robot in executing the grasp task. The entire system is verified on a simulation platform and on a real humanoid robot. The experimental results show that all subjects were able to use this hybrid BCI system successfully and with relative ease.
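To make the command mapping described above concrete, the following is a minimal, hypothetical sketch (in Python) of how the two classifier outputs of such a hybrid system could be fused into discrete robot commands. The stimulus frequencies, the function and class names, and the rule that motor imagery takes priority over locomotion are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of fusing SSVEP and motor-imagery classifier outputs
# into robot commands. Not the authors' implementation; all names and the
# chosen stimulus frequencies are assumptions for illustration.

from enum import Enum


class Command(Enum):
    FORWARD = "move_forward"
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    GRASP = "grasp"
    IDLE = "idle"


# Assumed mapping: three SSVEP stimulus frequencies (Hz) -> locomotion commands.
SSVEP_COMMAND_MAP = {
    7.5: Command.FORWARD,
    8.5: Command.TURN_LEFT,
    10.0: Command.TURN_RIGHT,
}


def decode_hybrid_output(ssvep_freq=None, motor_imagery_detected=False):
    """Combine SSVEP and motor-imagery classifier outputs into one command.

    ssvep_freq: dominant stimulus frequency reported by the SSVEP classifier,
        or None if no target exceeded its detection threshold.
    motor_imagery_detected: True if the single motor-imagery class used for
        grasping was detected.
    """
    # Assumption: motor imagery takes priority, so the robot stops
    # locomotion and performs the grasp when imagery is detected.
    if motor_imagery_detected:
        return Command.GRASP
    if ssvep_freq is not None:
        return SSVEP_COMMAND_MAP.get(ssvep_freq, Command.IDLE)
    return Command.IDLE


if __name__ == "__main__":
    # SSVEP classifier reports an 8.5 Hz target -> turn left.
    print(decode_hybrid_output(ssvep_freq=8.5))                 # Command.TURN_LEFT
    # Motor-imagery classifier fires -> grasp.
    print(decode_hybrid_output(motor_imagery_detected=True))    # Command.GRASP
```

In a design like this, the SSVEP branch supplies the discrete locomotion commands, while the single motor imagery class acts as a trigger for the grasp behavior, whose fine execution would be handed off to the visual servo module.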
