Abstract

Objective. Convolutional neural networks (CNNs) have proven successful as function approximators and have therefore been used for classification problems including electroencephalography (EEG) signal decoding for brain–computer interfaces (BCI). Artificial neural networks, however, are considered black boxes because they usually have thousands of parameters, making interpretation of their internal processes challenging. Here we systematically evaluate the use of CNNs for EEG signal decoding and investigate a method for visualizing the CNN model decision process. Approach. We developed a CNN model to decode the covert focus of attention from EEG event-related potentials during object selection. We compared the performance of the CNN with that of the commonly used linear discriminant analysis (LDA) classifier on datasets of different dimensionality and analyzed transfer learning capacity. Moreover, we validated the impact of individual model components by systematically altering the model. Furthermore, we investigated the use of saliency maps as a tool for visualizing the spatial and temporal features driving the model output. Main results. The CNN model and the LDA classifier achieved comparable accuracy on the lower-dimensional dataset, but the CNN significantly exceeded LDA performance on the higher-dimensional dataset (without hypothesis-driven preprocessing), achieving an average decoding accuracy of 90.7% (chance level = 8.3%). Parallel convolutions, tanh or ELU activation functions, and dropout regularization proved valuable for model performance, whereas sequential convolutions, the ReLU activation function, and batch normalization reduced accuracy or yielded no significant difference. Saliency maps revealed meaningful features, displaying the typical spatial distribution and latency of the P300 component expected during this task. Significance. Following systematic evaluation, we provide recommendations for when and how to use CNN models in EEG decoding. Moreover, we propose a new approach for investigating the neural correlates of a cognitive task by training CNN models on raw high-dimensional EEG data and utilizing saliency maps for relevant feature extraction.
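
For illustration only, the sketch below shows one way a model of this kind and its saliency maps could be implemented; it is not the authors' architecture. It combines two parallel temporal convolution branches with tanh activations, a spatial convolution across electrodes, dropout regularization, and a vanilla gradient-based saliency map (the gradient of the target-class score with respect to the input epoch). The layer sizes, kernel lengths, the 64-channel/256-sample input, and the 12-class output (chosen only because a 8.3% chance level suggests 12 classes) are assumptions.

```python
# Hedged sketch: a CNN with parallel convolution branches, tanh activations
# and dropout for EEG epochs (channels x time), plus a gradient-based
# saliency map. All sizes are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


class ParallelConvEEGNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=256, n_classes=12):
        super().__init__()
        # Branch 1: temporal convolution with a short kernel
        self.branch_short = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 16), padding=(0, 8)),
            nn.Tanh(),
        )
        # Branch 2: temporal convolution with a longer kernel
        self.branch_long = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),
            nn.Tanh(),
        )
        # Spatial convolution across all electrodes, then pooling and dropout
        self.spatial = nn.Sequential(
            nn.Conv2d(16, 16, kernel_size=(n_channels, 1)),
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.5),
        )
        # Size the classifier from a dummy forward pass
        with torch.no_grad():
            feat = self._features(torch.zeros(1, 1, n_channels, n_samples))
        self.classifier = nn.Linear(feat.shape[1], n_classes)

    def _features(self, x):
        a = self.branch_short(x)
        b = self.branch_long(x)
        # Concatenate the parallel branches along the feature-map axis,
        # trimming both to the original number of time samples
        z = torch.cat([a[..., : x.shape[-1]], b[..., : x.shape[-1]]], dim=1)
        return self.spatial(z).flatten(start_dim=1)

    def forward(self, x):
        return self.classifier(self._features(x))


def saliency_map(model, epoch, target_class):
    """Vanilla saliency map: gradient of the target-class score with
    respect to the input epoch (channels x time)."""
    model.eval()
    x = epoch.clone().unsqueeze(0).unsqueeze(0).requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze()  # shape: (n_channels, n_samples)


if __name__ == "__main__":
    model = ParallelConvEEGNet()
    fake_epoch = torch.randn(64, 256)  # synthetic EEG epoch
    sal = saliency_map(model, fake_epoch, target_class=0)
    print(sal.shape)  # torch.Size([64, 256])
```

Averaging such saliency maps over correctly classified epochs would give the kind of spatio-temporal relevance pattern the abstract describes for the P300.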

Highlights

  • Brain–computer interfaces (BCI) represent a bridge that allows direct communication between the brain and the environment without the need for muscular activity.

  • We developed a convolutional neural network (CNN) model to decode the covert focus of attention from EEG event-related potentials during object selection.

  • The classification performance and event-related potential (ERP) analysis of P300 spellers under overt and covert attentional conditions were investigated by Treder and Blankertz [54], who concluded that the P1, N1, P2, N2, and P3 components were enhanced in the case of overt attention.

Introduction

Brain–computer interfaces (BCI) represent a bridge that allows direct communication between the brain and the environment without the need for muscular activity. They could be especially beneficial for patients who have lost muscular control, such as in supporting rehabilitation following stroke [10, 28], regaining communication in locked-in patients suffering from amyotrophic lateral sclerosis [19, 39], or serving as assistive devices for people who have sustained spinal cord injuries [22].

The P300 is an attention-dependent event-related potential (ERP) component showing a positive deflection in the ERP waveform, peaking around 300 ms after stimulus onset regardless of the stimulus modality, e.g., visual, auditory, or somatosensory [43]. It can be reliably evoked using the oddball paradigm, in which infrequent target ‘relevant’ stimuli are presented among frequent standard ‘irrelevant’ stimuli, and it is mainly distributed over the midline scalp electrodes (Fz, Cz, Pz), extending to parietal electrodes. Comparing a full set of EEG electrodes to such a customized electrode subset showed that a significant increase in performance only occurs in the case of overt attention and not in the case of covert attention [6].
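
As a concrete illustration of the ERP analysis implied above, the short sketch below averages epochs time-locked to target versus standard stimuli and locates the peak of the target-minus-standard difference wave at a midline electrode within the typical P300 latency range. The data are synthetic, and the sampling rate, epoch length, and electrode index are assumptions made only for the example; none of these values come from the paper.

```python
# Hedged sketch of a standard target-vs-standard ERP comparison (P300).
import numpy as np

fs = 256                                         # sampling rate in Hz (assumed)
n_trials, n_channels, n_samples = 200, 64, 256   # 1 s epochs (assumed)
rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed, epoched EEG: trials x channels x time
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
is_target = rng.random(n_trials) < 0.2           # infrequent "oddball" targets

# Grand-average ERPs for each condition (channels x time)
erp_target = epochs[is_target].mean(axis=0)
erp_standard = epochs[~is_target].mean(axis=0)

# P300: positive deflection in the difference wave, typically maximal over
# midline electrodes roughly 300 ms after stimulus onset
cz = 31                                          # assumed index of electrode Cz
times_ms = np.arange(n_samples) / fs * 1000.0
diff_wave = erp_target[cz] - erp_standard[cz]
window = (times_ms >= 250) & (times_ms <= 500)
peak_ms = times_ms[window][np.argmax(diff_wave[window])]
print(f"Peak of the difference wave at Cz: {peak_ms:.0f} ms")
```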
