Abstract

This paper presents an important step towards increasing the independence of people with severe motor disabilities by using brain–computer interfaces to harness the power of the Internet of Things. We analyze the stability of brain signals as end-users with motor disabilities progress from performing simple standard on-screen training tasks to interacting with real devices in the real world. Furthermore, we demonstrate how the concept of shared control—which interprets the user's commands in context—empowers users to perform rather complex tasks without a high workload. We present the results of nine end-users with motor disabilities who successfully completed navigation tasks with a telepresence robot in a remote environment (in some cases in a different country) that they had never previously visited. Moreover, these end-users achieved levels of performance similar to those of a control group of ten healthy users who were already familiar with the environment.
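To make the shared-control idea concrete: the robot interprets a sparse, high-level command decoded from the BCI in the context of its own sensor readings, so the user does not have to micromanage the motion. The sketch below is purely illustrative and is not taken from the paper; the command set, gains, and sensor layout are assumptions chosen for readability.

```python
# Illustrative sketch only: blends a discrete BCI command with reactive
# obstacle avoidance from range sensors. All names and parameters
# (BCI_COMMANDS, K_REPULSE, SAFE_DIST) are assumptions, not the paper's values.

import math

# Hypothetical mapping from decoded BCI commands to desired motion
BCI_COMMANDS = {
    "forward": (0.3, 0.0),   # (linear m/s, angular rad/s)
    "left":    (0.1, 0.6),
    "right":   (0.1, -0.6),
}

K_REPULSE = 0.8   # gain of the obstacle-repulsion term (assumed)
SAFE_DIST = 0.5   # range below which obstacles start to repel, in metres

def shared_control(command, ranges_by_bearing):
    """Combine the user's sparse BCI command with context from range sensors.

    ranges_by_bearing: dict mapping sensor bearing (radians, robot frame,
    positive = left) to measured distance in metres.
    """
    lin, ang = BCI_COMMANDS[command]

    # Context term: obstacles ahead slow the robot and steer it away,
    # so the user only needs to issue occasional high-level commands.
    for bearing, dist in ranges_by_bearing.items():
        if dist < SAFE_DIST and abs(bearing) < math.pi / 2:
            weight = K_REPULSE * (SAFE_DIST - dist) / SAFE_DIST
            lin *= (1.0 - weight)                      # slow down near obstacles
            ang += -math.copysign(weight, bearing)     # turn away from them

    return lin, ang

if __name__ == "__main__":
    # Obstacle close on the left: robot keeps moving forward but veers right.
    readings = {-0.4: 2.0, 0.0: 1.5, 0.4: 0.3}
    print(shared_control("forward", readings))
```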
