Objective. The P300 is one of the most studied event-related potentials (ERPs) and has been widely used in brain–computer interfaces (BCIs). Fast and accurate recognition of P300s is therefore an important issue in BCI research. Recently, many novel classification algorithms have emerged for the P300 speller. Among them, discriminative canonical pattern matching (DCPM) has proven effective; its discriminative spatial pattern (DSP) filter can significantly enhance the spatial features of P300s. However, the spatial pattern of ERPs varies over time, which the traditional DCPM algorithm does not take into account. Approach. In this study, we developed an advanced version of DCPM, multi-window DCPM, which employs a series of time-dependent DSP filters to fine-tune the extraction of spatial ERP features. To verify its effectiveness, 25 subjects were recruited to perform a typical P300 speller experiment. Main results. Multi-window DCPM achieved a character recognition accuracy of 91.84% with only five training characters, significantly outperforming the traditional DCPM algorithm. It was also compared with eight other popular methods: SWLDA, SKLDA, STDA, BLDA, xDAWN, HDCA, sHDCA, and EEGNet. The results showed that multi-window DCPM performed best, especially with a small calibration dataset. The proposed algorithm was also applied to the P300 paradigm of the BCI Controlled Robot Contest at the 2019 World Robot Conference, where it won first place. Significance. These results demonstrate that multi-window DCPM is a promising method for improving the performance and practicability of the P300 speller.
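To make the multi-window idea concrete, the sketch below illustrates one common way a DSP-style spatial filter can be estimated per time window from labeled target/non-target epochs. This is a minimal illustration under assumed conventions, not the authors' implementation; the function names, window length, and regularization are hypothetical, and the full multi-window DCPM method additionally involves canonical pattern matching for classification, which is omitted here.

```python
import numpy as np
from scipy.linalg import eigh


def dsp_filter(X_target, X_nontarget, n_components=2):
    """Estimate a DSP-style spatial filter from two classes of epochs.

    X_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a projection matrix of shape (n_channels, n_components).
    """
    mean_t = X_target.mean(axis=0)      # class-average ERP template (channels x samples)
    mean_n = X_nontarget.mean(axis=0)
    grand = 0.5 * (mean_t + mean_n)

    # Between-class scatter of the class templates (channels x channels)
    Sb = (mean_t - grand) @ (mean_t - grand).T + (mean_n - grand) @ (mean_n - grand).T

    # Within-class scatter: deviation of single trials from their class template
    Sw = np.zeros_like(Sb)
    for X, m in ((X_target, mean_t), (X_nontarget, mean_n)):
        for trial in X:
            d = trial - m
            Sw += d @ d.T
    # Small diagonal loading (assumed here) to keep Sw invertible with few trials
    Sw += 1e-6 * np.trace(Sw) / Sw.shape[0] * np.eye(Sw.shape[0])

    # Generalized eigenvalue problem: maximize between- over within-class scatter
    evals, evecs = eigh(Sb, Sw)
    order = np.argsort(evals)[::-1][:n_components]
    return evecs[:, order]


def multiwindow_dsp(X_target, X_nontarget, window_len=50):
    """Fit one DSP filter per time window -- the 'multi-window' idea."""
    n_samples = X_target.shape[2]
    filters = []
    for start in range(0, n_samples - window_len + 1, window_len):
        sl = slice(start, start + window_len)
        filters.append((sl, dsp_filter(X_target[:, :, sl], X_nontarget[:, :, sl])))
    return filters  # list of (time slice, spatial filter) pairs
```

In this reading, the traditional DCPM corresponds to a single filter fitted over the whole epoch, whereas the multi-window variant lets the spatial projection follow the time-varying topography of the ERP.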