Abstract

Generalized zero-shot learning (GZSL) has significantly reduced the training requirements for steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs). Traditional methods require training data for every class, whereas GZSL needs data from only a subset of classes, dividing the classes into 'seen' classes (those with training data) and 'unseen' classes (those without). However, inefficient utilization of SSVEP data limits the accuracy and information transfer rate (ITR) of existing GZSL methods. To this end, we proposed a framework for more effective utilization of SSVEP data at three systematically combined levels: data acquisition, feature extraction, and decision-making. First, prevalent SSVEP-based BCIs overlook inter-subject variance in visual latency and employ a fixed sampling starting time (SST). At the data acquisition level, we introduced a dynamic sampling starting time (DSST) strategy, which uses classification results on a validation set to find the optimal sampling starting time (OSST) for each subject. In addition, we developed a Transformer structure that captures the global information of the input data to compensate for the small receptive fields of existing networks; its global receptive field allows it to adequately process information from longer input sequences. At the decision-making level, we designed a classifier selection strategy that automatically selects the optimal classifier for the seen and unseen classes, respectively. We also proposed a training procedure that makes the above solutions work in conjunction with each other. Our method was validated on three public datasets and outperformed the state-of-the-art (SOTA) methods. Crucially, it also outperformed representative methods that require training data for all classes.
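The DSST strategy described above amounts to a per-subject search over candidate sampling starting times, keeping the one that maximizes validation accuracy. The sketch below illustrates this idea under stated assumptions; the epoch window, candidate offset grid, and `classifier.predict` interface are illustrative placeholders rather than the authors' actual implementation.

```python
import numpy as np

def find_osst(raw_trials, labels, classifier, fs=250,
              candidate_offsets_s=np.arange(0.0, 0.30, 0.02),
              window_s=1.0):
    """Return the sampling starting offset (in seconds) with the best
    validation accuracy for one subject (hypothetical DSST sketch)."""
    best_offset, best_acc = 0.0, -1.0
    for offset in candidate_offsets_s:
        start = int(offset * fs)
        stop = start + int(window_s * fs)
        # Re-epoch every trial (channels x samples) from the candidate start.
        X = np.stack([trial[:, start:stop] for trial in raw_trials])
        preds = classifier.predict(X)      # any pretrained SSVEP decoder
        acc = np.mean(preds == labels)
        if acc > best_acc:
            best_offset, best_acc = offset, acc
    return best_offset, best_acc
```

In this sketch the selected offset would then be reused when epoching that subject's test data, mirroring the idea that each subject's visual latency calls for its own OSST.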
