Abstract

People with severe neuromuscular disorders caused by accidents or congenital diseases cannot interact normally with the physical environment. Intelligent robot technology offers a possible solution to this problem. However, a robot can hardly carry out a task without understanding the subject's intention, as it conventionally relies on speech or gestures. The brain-computer interface (BCI), a communication system that operates external devices by directly converting brain activity into digital signals, provides an alternative. In this study, a noninvasive BCI-based humanoid robotic system was designed and implemented for home service. A humanoid robot equipped with multiple sensors navigates to the object placement area under the guidance of a specific symbol, the "Naomark", which carries a unique ID, and then sends information about the scanned objects back to the user interface. Based on this information, the subject commands the robot to grasp the wanted object and deliver it to the subject. To identify the subject's intention, the channel projection-based canonical correlation analysis (CP-CCA) method was applied to the steady-state visual evoked potential (SSVEP)-based BCI system. Offline results showed that the average classification accuracy across all subjects reached 90%, and the online task completion rate exceeded 95%. Users can complete the grasping task with a minimum of commands, avoiding the control burden imposed by complex command sets. This system would provide useful assistance to people with severe motor impairment in their daily lives.
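The abstract does not detail the CP-CCA algorithm, but the underlying idea of CCA-based SSVEP classification is standard: for each candidate stimulus frequency, build sine/cosine reference signals (including harmonics) and pick the frequency whose references correlate most strongly with the recorded EEG. The sketch below shows this baseline CCA approach only; the channel-projection step specific to CP-CCA is not described in the source and is omitted. All function names, the sampling rate, and the frequency set are illustrative assumptions.

```python
import numpy as np

def cca_max_corr(X, Y):
    # Largest canonical correlation between column-centered matrices
    # X (n_samples, n_channels) and Y (n_samples, n_refs), via QR + SVD.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    # Sine/cosine templates at the stimulus frequency and its harmonics.
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)

def classify_ssvep(eeg, fs, stimulus_freqs, n_harmonics=2):
    # eeg: (n_samples, n_channels); returns the index of the stimulus
    # frequency whose reference set best matches the EEG segment.
    n = eeg.shape[0]
    scores = [cca_max_corr(eeg, reference_signals(f, fs, n, n_harmonics))
              for f in stimulus_freqs]
    return int(np.argmax(scores))
```

In an online system of the kind described, each flickering target on the user interface (e.g. one per graspable object reported by the robot) would be assigned one stimulus frequency, and the classifier's output index would be mapped to the corresponding robot command.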
