Abstract

Brain-computer interfaces (BCIs) based on Steady-State Visual Evoked Potentials (SSVEPs) have attracted much attention because of their high information transfer rate and minimal user training. However, most methods for decoding SSVEPs are limited to canonical correlation analysis (CCA) and its extensions. This study proposes a comparing network based on a Convolutional Neural Network (CNN), which learns the relationship between EEG signals and the templates corresponding to each SSVEP stimulus frequency. The effectiveness of the proposed method is validated by comparing it with standard CCA and other state-of-the-art methods for decoding SSVEPs (i.e., CNN and TRCA) on SSVEP datasets collected from 23 subjects. The comparison results indicate that the CNN-based comparing network significantly improves classification accuracy. Furthermore, the comparing network with TRCA achieved the best performance among the three comparing-network variants, with averaged accuracies of 91.24% (data length: 2 s) and 86.15% (data length: 1 s). These results validate the efficiency of the proposed CNN-based comparing network in decoding SSVEPs and suggest that the comparing network with TRCA is a promising methodology for SSVEP target identification that could further improve the performance of SSVEP-based BCI systems.
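The abstract does not specify the network architecture, so the following is only a minimal sketch of what a CNN-based comparing network could look like: a shared convolutional encoder embeds both an EEG segment and a class template (e.g., a TRCA- or CCA-derived reference for one stimulus frequency), and a small head scores how well they match. The layer shapes, channel count, and use of PyTorch are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only: architecture details are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ComparingNetwork(nn.Module):
    """Scores how well an EEG segment matches one frequency template (hypothetical layout)."""
    def __init__(self, n_channels=9, n_samples=500):
        super().__init__()
        # Shared encoder applied to both the EEG segment and the template.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),          # spatial filtering across channels
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=(1, 25), stride=(1, 5)),  # temporal filtering
            nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = self._feature_dim(n_channels, n_samples)
        # Head that compares the two embeddings and outputs a single match score.
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def _feature_dim(self, n_channels, n_samples):
        with torch.no_grad():
            return self.encoder(torch.zeros(1, 1, n_channels, n_samples)).shape[1]

    def forward(self, eeg, template):
        # eeg, template: (batch, 1, n_channels, n_samples)
        return self.head(torch.cat([self.encoder(eeg), self.encoder(template)], dim=1))
```

Under this reading, classification would compare a trial against the template of every stimulus frequency and select the frequency whose template receives the highest score; the CCA-, CNN-, and TRCA-based variants mentioned above would then differ only in how the templates are constructed.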
