Abstract

As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated as robotic tasks and environments grow in complexity. Traditional control modalities such as touch, speech or gesture are not necessarily suited for all users. While some users can make the effort to familiarize themselves with a robotic system, users with motor disabilities may not be capable of controlling such systems, even though they may need robotic assistance the most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only their thoughts. The system is composed of several interacting components: a brain–computer interface (BCI) that uses non-invasive neuronal signal recording and co-adaptive deep learning, high-level task planning based on referring expressions, navigation and manipulation planning, as well as environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface, and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks, close human–robot interactions, and the presence of unexpected changes. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. In combination with high-level task planning based on referring expressions and an autonomous robotic system, our approach opens up interesting new perspectives for non-invasive BCI-based human–robot interaction.
