Abstract

In a human-robot collaboration setting, a robot may be controlled by a user directly or through a brain-computer interface that detects user intention, or it may act as an autonomous agent. As such interactions grow in complexity, conflicts become inevitable. Goal conflicts can arise from different sources, for instance interface mistakes, in which the human's intention is misinterpreted, or errors of the autonomous system in meeting the task requirements and the human's expectations. Such conflicts evoke distinct spontaneous responses in the human brain, which could be used to regulate intrinsic task parameters and to improve the system's response to errors, leading to improved transparency, performance, and safety. To study the possibility of detecting interface and agent errors, we designed a virtual pick-and-place task with sequential human and robot responsibility and recorded the electroencephalography (EEG) activity of six participants. In the virtual environment, the robot either received a command from the participant through a computer keyboard or moved as an autonomous agent. In both cases, artificial errors were introduced in 20-25% of the trials. We found differences in the neural responses to interface and agent errors. From the EEG data, correct trials, interface errors, and agent errors were correctly predicted for 51.62% ± 9.99% of the pick movements (chance level 38.21%) and 46.84% ± 6.62% of the place movements (chance level 36.99%) in a pseudo-asynchronous fashion. Our study suggests that, in a human-robot collaboration setting, detecting these error responses could improve the future performance of a system that combines intention detection and autonomous modes. Specific examples include neural interfaces that replace or restore motor functions.
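
To illustrate the kind of three-class trial classification described above (correct trials, interface errors, agent errors), the sketch below shows a cross-validated classifier on per-trial EEG feature vectors. This is a minimal illustration under assumed choices (a shrinkage LDA classifier, synthetic stand-in features, and illustrative class proportions); it is not the authors' analysis pipeline, and the chance levels reported in the abstract depend on the actual class distribution and statistical procedure used in the study.

    # Hypothetical sketch: 3-class classification of EEG trials into
    # correct / interface error / agent error. All names, features, and
    # parameters here are illustrative, not the authors' pipeline.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for epoched EEG features: n_trials x n_features
    # (e.g., time-window amplitudes pooled over channels).
    n_trials, n_features = 300, 64
    X = rng.normal(size=(n_trials, n_features))

    # Imbalanced labels mimicking mostly correct trials plus two error types
    # (errors occurred in roughly 20-25% of trials in the study).
    y = rng.choice([0, 1, 2], size=n_trials, p=[0.78, 0.11, 0.11])

    # Shrinkage-regularized LDA is a common choice for EEG single-trial data.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"mean accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")

With random synthetic features, the printed accuracy should hover near the majority-class rate; meaningful decoding, as reported in the abstract, would require real error-related EEG features.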

