Abstract

Brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs) have attracted much attention because of their high information transfer rate and minimal user training. However, most methods applied to decode SSVEPs are limited to canonical correlation analysis (CCA) and extended CCA-based methods. This study proposed a comparing network based on a convolutional neural network (CNN), which was used to learn the relationship between EEG signals and the templates corresponding to each stimulus frequency of the SSVEPs. This novel method incorporated prior knowledge and a spatial filter (task-related component analysis, TRCA) to enhance the detection of SSVEPs. The effectiveness of the proposed method was validated by comparing it with standard CCA and other state-of-the-art methods for decoding SSVEPs (i.e., CNN and TRCA) on SSVEP datasets collected from 17 subjects. The comparison results indicated that the CNN-based comparing network could significantly improve classification accuracy compared with standard CCA, TRCA, and CNN. Furthermore, the comparing network with TRCA achieved the best performance among the three comparing-network-based methods, with averaged accuracies of 84.57% (data length: 2 s) and 70.21% (data length: 1 s). The study validated the efficiency of the proposed CNN-based comparing methods in decoding SSVEPs. It suggests that the comparing network with TRCA is a promising methodology for target identification of SSVEPs and could further improve the performance of SSVEP-based BCI systems.
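As background for the baseline the abstract compares against, standard CCA-based SSVEP detection computes the canonical correlation between a multichannel EEG segment and sine-cosine reference templates at each candidate stimulus frequency, then selects the frequency with the largest correlation. The sketch below is a minimal, generic illustration of that standard baseline (not the authors' proposed comparing network); the function names, the candidate frequency list, and the choice of three harmonics are illustrative assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y.

    X: (samples, channels) EEG segment; Y: (samples, references) templates.
    Computed as the top singular value of Qx^T Qy after QR orthonormalization.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_detect(eeg, freqs, fs, n_harmonics=3):
    """Return the candidate stimulus frequency with the highest correlation.

    eeg: (samples, channels); freqs: candidate frequencies in Hz;
    fs: sampling rate in Hz; n_harmonics: harmonics per reference set
    (3 is a common but illustrative choice).
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        Y = np.stack(refs, axis=1)
        scores.append(max_canonical_corr(eeg, Y))
    return freqs[int(np.argmax(scores))]

# Synthetic check: a noisy 10 Hz SSVEP over 2 channels, 2 s at 250 Hz.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(2 * fs) / fs
signal = np.sin(2 * np.pi * 10.0 * t)
eeg = np.stack([signal, 0.8 * signal], axis=1) + 0.5 * rng.standard_normal((len(t), 2))
detected = cca_detect(eeg, [8.0, 10.0, 12.0, 15.0], fs)
```

The proposed method in the abstract replaces this fixed correlation measure with a CNN that learns a similarity between the EEG and per-frequency templates, optionally after TRCA spatial filtering.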
