Abstract

Currently, most high-performance models for frequency recognition of steady-state visual evoked potentials (SSVEPs) are linear. However, SSVEPs collected from different channels can have non-linear relationships with each other, so linearly combining electroencephalogram (EEG) signals from multiple channels is not the most accurate solution for SSVEP classification. To further improve the performance of SSVEP-based brain-computer interfaces (BCIs), we propose a convolutional neural network-based non-linear model, i.e. convolutional correlation analysis (Conv-CA). Unlike pure deep learning models, Conv-CA uses convolutional neural networks (CNNs) on top of a self-defined correlation layer. The CNNs learn how to transform multi-channel EEG into a single EEG signal, and the correlation layer calculates the correlation coefficients between the transformed single EEG signal and reference signals. The CNNs provide non-linear operations that combine EEG across different channels and time points, while the correlation layer constrains the fitting space of the deep learning model. A comparison study between the proposed Conv-CA method and task-related component analysis (TRCA)-based methods is conducted; both methods are validated on a 40-class SSVEP benchmark dataset recorded from 35 subjects. The study verifies that Conv-CA significantly outperforms the TRCA-based methods. Moreover, Conv-CA has good explainability, since the inputs of its correlation layer can be analyzed to visualize what the model learned from the data. Conv-CA is a non-linear extension of spatial filters, and its CNN structures can be further explored and tuned to reach better performance. The structure of combining neural networks with unsupervised features has the potential to be applied to the classification of other signals.
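
To make the described architecture concrete, the following is a minimal sketch of the Conv-CA idea, not the authors' exact network: a small CNN collapses multi-channel EEG into a single time series, a correlation layer computes Pearson correlations against per-class reference signals, and a dense layer maps the correlation vector to class scores. All shapes, kernel sizes, and layer counts here are illustrative assumptions.

```python
# Hedged sketch of the Conv-CA structure described in the abstract.
# Layer sizes, kernel widths, and input shapes are assumptions, not the
# published architecture.
import torch
import torch.nn as nn


class ConvCASketch(nn.Module):
    def __init__(self, n_channels=9, n_samples=250, n_classes=40):
        super().__init__()
        # Signal CNN: non-linear combination across channels and time,
        # reducing (n_channels, n_samples) to a single (n_samples,) signal.
        self.signal_cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )
        # Dense layer on top of the correlation vector (one value per class).
        self.fc = nn.Linear(n_classes, n_classes)

    @staticmethod
    def correlation_layer(x, refs):
        # Pearson correlation between each transformed signal and each
        # reference signal. x: (batch, n_samples); refs: (n_classes, n_samples).
        x = x - x.mean(dim=1, keepdim=True)
        refs = refs - refs.mean(dim=1, keepdim=True)
        num = x @ refs.t()                                    # (batch, n_classes)
        den = x.norm(dim=1, keepdim=True) * refs.norm(dim=1)  # (batch, n_classes)
        return num / (den + 1e-8)

    def forward(self, eeg, refs):
        # eeg: (batch, n_channels, n_samples); refs: (n_classes, n_samples)
        single = self.signal_cnn(eeg).squeeze(1)   # (batch, n_samples)
        corr = self.correlation_layer(single, refs)
        return self.fc(corr)                       # class scores
```

The correlation layer has no trainable parameters; it constrains what the CNN can learn, since the network must produce a signal whose correlation with the correct class's reference is maximized.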
