Abstract

Motor Imagery (MI)-based Brain–Computer Interfaces (BCIs) have been widely used as an alternative communication channel for patients with severe motor disabilities, achieving high classification accuracy through machine learning techniques. Recently, deep learning techniques have advanced the state of the art of MI-based BCIs. However, these techniques still lack strategies to quantify predictive uncertainty and may produce overconfident predictions. In this work, methods to enhance the performance of existing MI-based BCIs are proposed in order to obtain a more reliable system for real application scenarios. First, the Monte Carlo dropout (MCD) method is applied to MI deep neural models to improve classification and provide uncertainty estimation. This approach was implemented using a Shallow Convolutional Neural Network (SCNN-MCD) and an ensemble model (E-SCNN-MCD). As another contribution, to discriminate MI task predictions of high uncertainty, a threshold approach is introduced and tested for both the SCNN-MCD and E-SCNN-MCD approaches. The BCI Competition IV Databases 2a and 2b were used to evaluate the proposed methods under both subject-specific and non-subject-specific strategies, obtaining encouraging results for MI recognition.

Highlights

  • Deep neural network (DNN) techniques have gained enormous acceptance in the scientific community with respect to other machine learning techniques

  • We investigate, with a Shallow Convolutional Neural Network combined with Monte Carlo dropout (SCNN-MCD), how different uncertainty measures correlate with predictive accuracy, and we introduce a threshold method that rejects EEG inputs producing highly uncertain predictions (cases in which the prediction should not be trusted), thereby minimizing the error rate of the classifier

  • The average margin of confidence was similar across subjects, whereas we found a large variation in the mutual information
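The rejection scheme described in the highlights can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the `mi_threshold` value are assumptions. Given T stochastic forward passes (dropout kept active at test time), it computes the predictive entropy, the mutual information (the epistemic component), and the margin of confidence, then abstains when the mutual information exceeds a threshold.

```python
import numpy as np

def mc_dropout_uncertainty(probs):
    """probs: array of shape (T, C) with softmax outputs from T
    stochastic forward passes (dropout active at inference)."""
    mean = probs.mean(axis=0)  # averaged predictive distribution
    # Predictive entropy: total uncertainty of the averaged prediction
    pred_entropy = -np.sum(mean * np.log(mean + 1e-12))
    # Expected entropy of the individual passes (aleatoric part)
    exp_entropy = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))
    # Mutual information: epistemic (model) uncertainty
    mutual_info = pred_entropy - exp_entropy
    # Margin of confidence: gap between the two largest class probabilities
    top2 = np.sort(mean)[-2:]
    margin = top2[1] - top2[0]
    return mean, pred_entropy, mutual_info, margin

def classify_with_rejection(probs, mi_threshold=0.2):
    """Return the predicted class, or None when the mutual information
    is too high to trust the prediction (the trial is rejected)."""
    mean, _, mutual_info, _ = mc_dropout_uncertainty(probs)
    if mutual_info > mi_threshold:
        return None  # abstain rather than risk an overconfident error
    return int(np.argmax(mean))
```

When the T passes agree, the mutual information is near zero and the prediction is accepted; when they disagree strongly, the mutual information grows and the trial is rejected. The threshold itself would be tuned on validation data.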


Introduction

Deep neural network (DNN) techniques have gained enormous acceptance in the scientific community relative to other machine learning techniques. For this reason, DNNs are becoming more attractive for various research areas, such as language processing, computer-assisted systems, medical signal processing, and autonomous vehicles, among others. Despite the impressive accuracy of DNN-based BCIs, these approaches may produce overconfident predictions, and quantifying the uncertainty of those predictions remains a challenge. Overconfident incorrect predictions are undesirable; uncertainty quantification is crucial to guarantee more robust BCIs with reliable responses, making them suitable for real-life scenarios [2,3].
