Abstract
Brain-computer interfaces (BCI) based on the P300 event-related potential (ERP) have been studied widely over the past decade. These BCIs exploit stimuli, called oddballs, which are presented on a computer screen in an arbitrary fashion to implement a binary selection mechanism. The P300 potential has been linked to human surprise, meaning that potentials are triggered by unpredictable events. This hypothesis is the basis of the oddball paradigm. In this work, we go beyond the standard paradigm and exploit the P300 in a more natural fashion for shaping human-robot interaction (HRI). In HRI, flawless behavior of the robot is essential to avoid confusion or anxiety in the human user when interacting with the robot. Detecting such reactions in the human user on the fly and providing instantaneous feedback to the robot is crucial. Ideally, the feedback system demands no additional cognitive load and operates automatically in the background. In other words, providing feedback from the human user to the robot should be an inherent feature of the human-machine interaction framework. Information extracted from the human EEG, in particular the P300, is a well-suited candidate for serving as input to this feedback loop. We propose to use the P300 as a means of human-robot interaction, in particular to spot the surprise of the human user during interaction, so that any mistakes in robot behavior the user observes are detected in time. In this way, the robot can notice its mistakes as early as possible and correct them accordingly. Our brain-robot interface implementing the proposed feedback system consists of the following core modules: (1) a P300 spotter that analyzes the incoming preprocessed data stream to identify P300 potentials on a single-trial basis, and (2) a translation module that translates the detected P300s into appropriate feedback signals to the robot. The classification relies on a supervised machine learning algorithm that requires labeled training data. This data must be collected subject-wise to account for the high inter-subject variance typically found in EEG data. The off-line training needs to be carried out only once, prior to using the interface. The trained classifier is then employed for on-line detection of P300 signals. During on-line operation, the incoming multi-channel EEG data is recorded and analyzed continuously. Each new incoming sample vector is added to a new window. Spectral, spatial, and temporal features are extracted from the filtered windows. The resulting feature vectors are classified, and each vector is assigned a probability that it contains a P300. Eventually, a feedback signal to the robot is generated based on the classification result, either a class label or a probability between 0 and 1. The proposed framework was tested off-line in a scenario using Honda's humanoid robot ASIMO. This scenario is suited for eliciting P300 events in a controlled experimental environment without neglecting the constraints of real robots. We recorded EEG data during interaction with ASIMO and applied our method off-line. In the future, we plan to extend our system to a fully on-line operating framework.
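To make the described pipeline concrete, the sketch below illustrates the window-based detection loop the abstract outlines: windows of multi-channel EEG are band-pass filtered, reduced to feature vectors, scored by a subject-wise pre-trained classifier, and turned into a feedback signal (a probability or a class label). This is a minimal sketch under stated assumptions, not the authors' implementation: the sampling rate, window length, channel count, temporal features, and the choice of an LDA classifier are all illustrative, and the training data here is synthetic stand-in data.

```python
# Hypothetical sketch of the on-line P300 detection loop described above.
# All names and parameters are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256          # sampling rate in Hz (assumed)
N_CHANNELS = 32   # number of EEG channels (assumed)
WIN_LEN = FS      # one-second analysis window (assumed)

def bandpass(window, lo=0.5, hi=12.0):
    """Band-pass filter each channel; the P300 is a slow wave."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return lfilter(b, a, window, axis=0)

def extract_features(window):
    """Toy temporal features: sub-sampled amplitudes of each channel."""
    filtered = bandpass(window)
    return filtered[::16, :].ravel()  # decimate in time, then flatten

# --- off-line, subject-wise training (carried out once) ------------------
rng = np.random.default_rng(0)
train_windows = rng.standard_normal((200, WIN_LEN, N_CHANNELS))  # stand-in EEG
labels = rng.integers(0, 2, size=200)                            # 1 = P300 present
X = np.array([extract_features(w) for w in train_windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# --- on-line operation: score each incoming window -----------------------
def feedback_signal(window, threshold=0.5):
    """Return the P300 probability and a binary feedback flag for the robot."""
    p = clf.predict_proba(extract_features(window)[None, :])[0, 1]
    return p, bool(p > threshold)

prob, surprise = feedback_signal(rng.standard_normal((WIN_LEN, N_CHANNELS)))
print(f"P300 probability: {prob:.2f}, signal robot: {surprise}")
```

In a real system, the stand-in arrays would be replaced by the subject's labeled calibration recordings and the continuous EEG stream, and the thresholded output (or the raw probability) would be forwarded to the robot's control loop as the feedback signal.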