In general, a large amount of training data can effectively improve the classification performance of a Steady-State Visually Evoked Potential (SSVEP)-based Brain-Computer Interface (BCI) system. However, collecting such data prolongs the training time and considerably restricts the practicality of the system. This study proposed an SSVEP nonlinear signal model based on the Volterra filter, which can reconstruct stable reference signals from a relatively small number of training targets through transfer learning, thereby reducing the training cost of SSVEP-BCI. Building on this model, a transfer-extended Canonical Correlation Analysis (t-eCCA) method was designed to achieve cross-target transfer. In a single-target SSVEP experiment with 16 stimulus frequencies, t-eCCA obtained an average accuracy of 86.96% ± 12.87% across 12 subjects using only half of the calibration time. This accuracy exhibited no significant difference from that of the representative training-based classification algorithms, extended canonical correlation analysis (eCCA, 88.32% ± 13.97%) and task-related component analysis (TRCA, 88.92% ± 14.44%), and was significantly higher than that of the classic training-free algorithms, canonical correlation analysis (CCA) and filter-bank CCA. These results show that the proposed cross-target transfer algorithm t-eCCA can fully exploit the information carried by the calibrated targets and their stimulus frequencies, effectively reducing the training time of SSVEP-BCI.
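For context, the classic training-free CCA baseline mentioned above correlates the multichannel EEG segment with sine-cosine reference templates at each candidate stimulus frequency and selects the frequency with the highest canonical correlation. The following is a minimal Python sketch of that baseline only (not the proposed Volterra-model t-eCCA, which is described in the full paper); the sampling rate, channel count, harmonic count, and frequency grid are illustrative assumptions, and scikit-learn/NumPy are assumed to be available.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def make_reference(freq, n_harmonics, fs, n_samples):
    """Sine-cosine reference signals for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    ref = []
    for h in range(1, n_harmonics + 1):
        ref.append(np.sin(2 * np.pi * h * freq * t))
        ref.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(ref, axis=1)  # shape: (n_samples, 2 * n_harmonics)

def cca_classify(eeg, stim_freqs, fs, n_harmonics=3):
    """Standard CCA-based SSVEP classification (training-free baseline).

    eeg: (n_samples, n_channels) single-trial multichannel EEG segment.
    Returns the stimulus frequency whose reference correlates best with the EEG.
    """
    n_samples = eeg.shape[0]
    scores = []
    for f in stim_freqs:
        ref = make_reference(f, n_harmonics, fs, n_samples)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, ref)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))], scores

# Synthetic demo (hypothetical parameters): 8 channels, 1 s at 250 Hz,
# 16 candidate frequencies, with a 10 Hz response mixed into the channels.
fs, n_samples = 250, 250
stim_freqs = np.arange(8.0, 16.0, 0.5)  # 16 targets on an assumed grid
rng = np.random.default_rng(0)
eeg = make_reference(10.0, 1, fs, n_samples) @ rng.normal(size=(2, 8)) \
      + 0.5 * rng.normal(size=(n_samples, 8))
pred, _ = cca_classify(eeg, stim_freqs, fs)
print("predicted frequency:", pred)
```

The training-based methods compared in the study (eCCA, TRCA, and the proposed t-eCCA) replace or augment these sine-cosine templates with subject-specific reference signals estimated from calibration data; t-eCCA additionally reconstructs references for uncalibrated targets via the Volterra-filter model, which is what halves the calibration time reported above.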