Abstract

The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single-trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work, we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG, such as the P300, which are indicators of concrete states or brain processes like target recognition. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single-trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes if the samples of both classes lack a relevant pattern. Classifier transfer is important for the use of BR in application scenarios where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.

Highlights

  • We present results on improving detection accuracy by choosing optimal training windows based on event-related potential (ERP) and machine learning (ML) analysis, i.e., we show how to optimally combine different training windows

  • The choice of windows, the samples used for transfer, and the combinations of windows were first defined based on knowledge about the underlying brain activity gained from average ERP analysis, and then confirmed by systematic ML analysis
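To make the highlights above concrete, the following is a minimal, hypothetical sketch (not the authors' code) of single-trial ERP classification with a chosen training window and classifier transfer, using synthetic EEG-like data. The window boundaries, signal model, and use of scikit-learn's LDA are all illustrative assumptions.

```python
# Hypothetical sketch: windowed single-trial ERP classification and
# classifier transfer on synthetic data (assumed 100 Hz sampling rate).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 100  # 1 s epochs

def make_epochs(with_p300):
    """Simulate epochs; targets get an added P300-like deflection ~300 ms."""
    x = rng.normal(0.0, 1.0, size=(n_trials, n_channels, n_samples))
    if with_p300:
        t = np.arange(n_samples) / 100.0
        # Gaussian bump centered at 300 ms, broadcast over trials/channels
        x += 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return x

targets = make_epochs(True)      # class with the relevant pattern
standards = make_epochs(False)   # class without it

def window_features(epochs, start=25, stop=40):
    """Mean amplitude per channel in a fixed training window (250-400 ms)."""
    return epochs[:, :, start:stop].mean(axis=2)

X = np.vstack([window_features(targets), window_features(standards)])
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
clf.fit(X, y)

# "Classifier transfer": apply the trained classifier to epochs from a
# different condition that likewise lacks the relevant pattern.
new_standards = window_features(make_epochs(False))
pred = clf.predict(new_standards)
print(f"fraction classified as non-target: {(pred == 0).mean():.2f}")
```

The choice of the 250-400 ms window here stands in for the paper's point that training windows can be selected from average ERP analysis before being validated by systematic ML analysis.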


Introduction

During the last decades, different approaches have been developed to support humans in their daily life and working environment, or to restore sensory and motor functions, with the help of intelligent and autonomous robotic systems that behave situationally and support humans according to the context [1,2,3,4,5]. During interaction, implicit information is transferred alongside explicit information; the interacting persons can use it to infer each other's general state, such as the emotional state, involvement in the interaction or communication, or mental load. This implicit information serves to adapt behavior in order to interact better, e.g., more efficiently. A promising approach to improving the behavior of autonomous artificial systems is to adapt them with respect to the state of the interacting human. Such adaptation of technical systems is, in a more general sense, known as biocybernetic adaptation [6]. To this end, (psycho)physiological data from the user, such as galvanic skin response, blood pressure, gestures, eye gaze, facial expressions, prosody, brain activity, or combinations thereof, are applied [5,6,8,9].


