Abstract

Recently, artificial neural networks (ANNs) have proven effective and promising for steady-state visual evoked potential (SSVEP) target recognition. Nevertheless, they usually contain a large number of trainable parameters and thus require a significant amount of calibration data, which is a major obstacle because EEG collection procedures are costly. This paper aims to design a compact network that avoids over-fitting of ANNs in individual SSVEP recognition. This study integrates the prior knowledge of SSVEP recognition tasks into the design of an attention neural network. First, benefiting from the interpretability of the attention mechanism, an attention layer is applied to convert the operations of conventional spatial filtering algorithms into an ANN structure, which reduces the network connections between layers. Then, constraints derived from SSVEP signal models and from weights shared across stimuli further condense the trainable parameters. A simulation study on two widely used datasets demonstrates that the proposed compact ANN structure with the proposed constraints effectively eliminates redundant parameters. Compared to existing prominent deep neural network (DNN)-based and correlation analysis (CA)-based recognition algorithms, the proposed method reduces the trainable parameters by more than 90% and 80%, respectively, and boosts individual recognition performance by at least 57% and 7%, respectively. Incorporating task-specific prior knowledge into the ANN makes it more effective and efficient. The proposed ANN has a compact structure with fewer trainable parameters, and thus requires less calibration data while achieving prominent individual SSVEP recognition performance.
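The following is a minimal sketch, not the paper's actual architecture, of the general idea the abstract describes: an attention weight vector over EEG channels acts as a learnable spatial filter and is shared across all stimulus classes, so the trainable-parameter count stays small, and the filtered signal is then scored against per-stimulus reference templates. The class and function names (`SharedSpatialAttention`, `classify_by_template_correlation`), the tensor shapes, and the use of simple correlation scoring are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SharedSpatialAttention(nn.Module):
    """Illustrative sketch (assumed, not the authors' exact network): a single
    attention vector over EEG channels serves as a spatial filter shared by
    every stimulus class, so the module has only n_channels parameters."""

    def __init__(self, n_channels: int):
        super().__init__()
        self.channel_scores = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples) band-pass filtered EEG epochs
        w = torch.softmax(self.channel_scores, dim=0)   # attention weights = spatial filter
        return torch.einsum("c,bct->bt", w, x)          # (batch, n_samples) filtered signal


def classify_by_template_correlation(filtered: torch.Tensor,
                                     templates: torch.Tensor) -> torch.Tensor:
    """Score each spatially filtered epoch against per-stimulus reference
    templates (e.g. sine-cosine references) and return the best-matching class."""
    # filtered: (batch, n_samples); templates: (n_classes, n_samples)
    f = filtered - filtered.mean(dim=1, keepdim=True)
    t = templates - templates.mean(dim=1, keepdim=True)
    corr = (f @ t.T) / (f.norm(dim=1, keepdim=True) * t.norm(dim=1) + 1e-8)
    return corr.argmax(dim=1)
```

Under this kind of weight sharing, the spatial-filtering stage costs only one parameter per channel regardless of the number of stimuli, which is the spirit of the parameter reduction claimed in the abstract.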
