This study proposes a novel decoding method called Discriminant Compacted Network (Dis-ComNet), which exploits the advantages of both spatial filtering and deep learning. Specifically, SSVEP features are enhanced using global template alignment (GTA) and Discriminant Spatial Pattern (DSP), and a Compacted Temporal-Spatio Module (CTSM) is then designed to extract finer features. The proposed method was evaluated on a self-collected high-frequency dataset, the public Benchmark dataset, and a public wearable dataset. The results showed that Dis-ComNet significantly outperformed state-of-the-art spatial filtering methods, deep learning methods, and other fusion methods. Remarkably, on the high-frequency dataset, Dis-ComNet improved classification accuracy by 3.9%, 3.5%, 3.2%, 13.3%, 17.4%, 37.5%, and 2.5% compared with eTRCA, eTRCA-R, TDCA, DNN, EEGnet, Ensemble-DNN, and TRCA-Net, respectively. On the Benchmark dataset, its accuracy was 4.7%, 4.6%, 23.6%, 52.5%, 31.7%, and 7.0% higher than that of eTRCA, eTRCA-R, DNN, EEGnet, Ensemble-DNN, and TRCA-Net, respectively, and comparable to that of TDCA. On the wearable dataset, its accuracy was 9.5%, 7.1%, 36.1%, 26.3%, 15.7%, and 4.7% higher than that of eTRCA, eTRCA-R, DNN, EEGnet, Ensemble-DNN, and TRCA-Net, respectively, and likewise comparable to that of TDCA. Moreover, the model achieved ITRs of up to 126.0 bits/min, 236.4 bits/min, and 103.6 bits/min on the high-frequency, Benchmark, and wearable datasets, respectively. This study develops an effective model for the detection of SSVEPs, facilitating the development of high-accuracy SSVEP-BCI systems.