Abstract

One of the expectations for the next generation of industrial robots is that they work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, prevailing industrial robots are poor at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) that can directly deliver human cognition to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while operators visually inspect the quality of parts. The BCI is verified through a human subject study. In the study, a camera by the side of the conveyor takes a photo of each industrial part and presents it to the operator automatically. When the operator looks at the photo, electroencephalography (EEG) signals are collected through the BCI. The inspection decision is extracted from the SSVEPs in the EEG. When the operator identifies a defective part, the decision is communicated to the robot, which locates the defective part with a second camera and removes it from the conveyor. The robot can grasp various parts with our random grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction, trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA model is further validated in an online experiment, achieving 93% accuracy in identifying and removing defective parts.
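As background for the CCA baseline the abstract compares against, the sketch below shows the standard CCA approach to SSVEP frequency detection: the canonical correlation between a multi-channel EEG window and sinusoidal reference signals at each candidate stimulus frequency, with the highest-scoring frequency taken as the decision. This is a minimal illustration of the baseline technique, not the paper's CNN-CCA model; the sampling rate, window length, stimulus frequencies, and channel count are all assumed for the example.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250            # sampling rate in Hz (assumed)
WINDOW_S = 2.0      # EEG window length in seconds (assumed)
FREQS = [8.0, 12.0] # hypothetical stimulus frequencies, e.g. accept/reject
N_HARMONICS = 2     # number of harmonics in the reference set (assumed)

def reference_signals(freq, n_samples, fs, n_harmonics):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)  # shape: (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg_window):
    """eeg_window: (n_samples, n_channels). Returns the stimulus frequency
    whose references have the highest canonical correlation with the EEG."""
    scores = []
    for f in FREQS:
        refs = reference_signals(f, eeg_window.shape[0], FS, N_HARMONICS)
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg_window, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return FREQS[int(np.argmax(scores))]

# Synthetic demo: an 8 Hz SSVEP component plus noise on 8 channels.
rng = np.random.default_rng(0)
n = int(FS * WINDOW_S)
t = np.arange(n) / FS
eeg = np.sin(2 * np.pi * 8.0 * t)[:, None] + 0.5 * rng.standard_normal((n, 8))
print(classify_ssvep(eeg))  # expected: 8.0
```

The paper's contribution replaces the fixed sinusoidal references and raw correlation score with a learned CNN front end; the abstract reports that this CNN-CCA model outperforms plain CCA as well as CCA-SVM and PSD-SVM.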