Abstract

A Brain-Computer Interface (BCI) acts as a communication mechanism that uses brain signals to control external devices. These signals can be generated independently of the normal output pathways of peripheral nerves and muscles, as in passive BCI. This is especially beneficial for people with severe motor disabilities. Traditional BCI systems have relied solely on brain signals recorded using Electroencephalography (EEG) and have used rule-based translation algorithms to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as telepresence, grasping of objects, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also reviews the methods and system design used in the discussed applications.

Highlights

  • Brain-Computer Interfaces (BCIs) lie at the intersection of signal processing, machine learning, and robotics systems

  • This paper reviews applications in which a humanoid robot is controlled using brain signals to perform a wide variety of tasks, such as grasping objects, navigation, and telepresence. For each application, we give an overview, describe the system design, and summarize the results of the experiments conducted. The review covers BCI applications that use only EEG signals, applications that use multi-sensor fusion, in which sensor inputs beyond EEG are considered for execution of the desired task (Section 4), and augmented reality-assisted BCI (Section 5). To the best of our knowledge, this work is the first review of BCI-controlled humanoids

  • Two major techniques used to implement this application were (i) programming by demonstration, in which the robot learns a task by observing someone perform it, and (ii) BCI-based control, in which the brain signals evoked by visual stimuli are converted into control commands by classifying the elicited P300 response

Summary

Introduction

Brain-Computer Interfaces (BCIs) lie at the intersection of signal processing, machine learning, and robotics systems. To improve the performance of such systems, researchers have actively explored multi-sensor fusion over the past several years. Such systems are often termed hybrid BCI systems, and they make control decisions based on the fusion of inputs from various sensors. This paper reviews applications in which a humanoid robot is controlled using brain signals to perform a wide variety of tasks, such as grasping objects, navigation, and telepresence. For each application, we give an overview, describe the system design, and summarize the results of the experiments conducted. The review covers BCI applications that use only EEG signals (discussed in Section 3), applications that use multi-sensor fusion, in which sensor inputs beyond EEG are considered for execution of the desired task (Section 4), and augmented reality-assisted BCI (Section 5). To the best of our knowledge, this work is the first review of BCI-controlled humanoids.
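To make the hybrid-BCI idea concrete, the following is a minimal sketch of decision-level fusion, in which per-command confidence scores from an EEG classifier are combined with scores from an auxiliary sensor (e.g. an eye tracker) by weighted averaging. The command set, weights, and function names here are illustrative assumptions, not taken from any of the reviewed systems.

```python
def fuse_decisions(eeg_probs, aux_probs, w_eeg=0.7, w_aux=0.3):
    """Decision-level fusion for a hypothetical hybrid BCI.

    Combines per-command probabilities from an EEG classifier with
    those from an auxiliary sensor by weighted sum, then issues the
    command with the highest fused score.
    """
    assert eeg_probs.keys() == aux_probs.keys()
    fused = {cmd: w_eeg * eeg_probs[cmd] + w_aux * aux_probs[cmd]
             for cmd in eeg_probs}
    # Pick the command whose fused confidence is largest.
    return max(fused, key=fused.get), fused

# Example: both modalities favor "forward", so the fused decision does too.
command, scores = fuse_decisions(
    {"forward": 0.6, "left": 0.3, "right": 0.1},   # EEG classifier output
    {"forward": 0.5, "left": 0.1, "right": 0.4},   # auxiliary sensor output
)
print(command)  # forward
```

Real hybrid BCIs vary in where fusion happens (feature level vs. decision level) and in how weights are chosen; this sketch only illustrates the decision-level case.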

Brain-Computer Interface
Hybrid BCI
Classification Algorithms
Humanoids
BCI-Controlled Humanoid Applications Using Only EEG
Results
BCI-Controlled Humanoid Applications Using Hybrid BCI
Summary of Applications
Conclusions
