Abstract

Electroencephalogram (EEG) data used in brain–computer interfaces typically have a subpar signal-to-noise ratio, and data acquisition is expensive. Linear discriminant analysis (LDA) is an effective and commonly used classifier for discriminating event-related potentials, but it requires an estimate of the feature distribution. This information is provided by the feature covariance matrix, whose large number of free parameters calls for regularization approaches such as Ledoit–Wolf shrinkage. Assuming that the noise in event-related potential recordings is not time-locked, we propose to decouple the time component from the covariance matrix of event-related potential data in order to further improve the covariance estimates used by LDA. We compare three regularized LDA variants and a feature representation based on Riemannian geometry against our proposed LDA with time-decoupled covariance estimates. Extensive evaluations on 14 EEG datasets reveal that the novel approach increases classification performance by up to four percentage points for small training datasets and gracefully converges to the performance of standard shrinkage-regularized LDA for large training datasets. Given these results, practitioners in this field should consider using our proposed time-decoupled covariance estimation when applying LDA to classify event-related potentials, especially when few training data points are available.
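As a rough sketch of the general idea (not the authors' exact algorithm), the contrast between a full Ledoit–Wolf shrinkage covariance and a time-decoupled, Kronecker-factorized estimate might look as follows. All array shapes and the choice of temporal estimator are illustrative assumptions; the key point is that the spatial (between-channel) part can pool samples across time points, which is valid if the noise is not time-locked.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 40, 8, 5  # toy dimensions (hypothetical)
epochs = rng.standard_normal((n_trials, n_channels, n_times))

# Standard approach: flatten each epoch into a (n_channels * n_times)
# feature vector and shrink the full covariance matrix.
X = epochs.reshape(n_trials, -1)  # feature index = channel * n_times + time
cov_full = LedoitWolf().fit(X).covariance_

# Time-decoupled sketch: assume the covariance factorizes into a spatial
# (between-channel) part and a temporal part. The spatial part is estimated
# from all time points pooled together, giving n_trials * n_times samples
# instead of only n_trials.
pooled = epochs.transpose(0, 2, 1).reshape(-1, n_channels)  # (trials*times, channels)
cov_spatial = LedoitWolf().fit(pooled).covariance_

# Temporal part from channel-averaged signals (an illustrative choice only).
temporal = epochs.mean(axis=1)  # (n_trials, n_times)
cov_temporal = LedoitWolf().fit(temporal).covariance_

# Kronecker product (spatial outer, temporal inner, matching the flattening
# above) yields a full-size covariance with far fewer free parameters.
cov_decoupled = np.kron(cov_spatial, cov_temporal)
```

The parameter count drops from O((CT)^2) for the full matrix to O(C^2 + T^2) for the two factors, which is why such estimates can be more reliable when training data are scarce.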

Highlights

  • A brain–computer interface (BCI) allows a subject to, e.g., control a computer program using his or her brain signals, which are often recorded via the electroencephalogram (EEG), as it is non-invasive, requires relatively inexpensive equipment and could be used by a large part of the population (Wolpaw et al 2002)

  • To show the efficacy of our method, named time-decoupled covariance estimation, we carefully evaluate its performance on datasets recorded by our lab as well as on public event-related potential (ERP) datasets, most of which are available in MOABB (Mother of All BCI Benchmarks) (Jayaram and Barachant 2018)

  • We obtain a robust estimate of the between-channel covariance matrix in order to enhance the covariance matrix needed for the calculation of the linear discriminant analysis (LDA) weight vector and bias
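To make the role of the covariance matrix concrete, a minimal sketch of binary LDA follows: the weight vector is the inverse covariance applied to the class-mean difference, and the bias places the decision boundary at the midpoint between the class means. The data, shapes, and plain pooled covariance here are purely illustrative; in practice the shrinkage or time-decoupled estimate would be substituted for `cov`.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 6  # toy dimensionality (hypothetical)
X0 = rng.standard_normal((30, n_features))        # non-target class
X1 = rng.standard_normal((30, n_features)) + 0.5  # target class, shifted mean

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class covariance; this is the estimate that regularization
# approaches such as shrinkage aim to improve.
cov = np.cov(np.vstack([X0 - mu0, X1 - mu1]), rowvar=False)

# LDA weight vector: w = cov^{-1} (mu1 - mu0)
w = np.linalg.solve(cov, mu1 - mu0)
# Bias: decision boundary at the midpoint between the class means.
b = -w @ (mu0 + mu1) / 2

# A positive score w^T x + b indicates the target class.
scores = np.vstack([X0, X1]) @ w + b
```

Because the weight vector depends on the inverse of the covariance matrix, errors in the covariance estimate propagate directly into the classifier, which motivates better estimates when training data are limited.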


Summary

Introduction

A brain–computer interface (BCI) allows a subject to, e.g., control a computer program using his or her brain signals, which are often recorded via the electroencephalogram (EEG), as it is non-invasive, requires relatively inexpensive equipment and could be used by a large part of the population (Wolpaw et al 2002). To realize control via BCIs, machine learning techniques are key to decoding the brain signals in real time. In addition to the poor signal-to-noise ratio, the machine learning problem is aggravated by the oftentimes small amount of training data available in BCI experiments.


