Abstract. Classifying Steady-State Visual Evoked Potentials (SSVEPs) is crucial for enhancing the performance of Brain-Computer Interface (BCI) systems. This study introduces a new method for SSVEP classification within BCI frameworks, called the Filter Bank and Fourier Transform Convolutional Neural Network (FBFCNN). The FBFCNN model extracts key harmonic features by first decomposing SSVEP signals into multiple sub-bands with a filter bank. Subsequently, the Fast Fourier Transform (FFT) is applied to convert these sub-band signals from the time domain into the frequency domain. The real and imaginary components of the resulting spectra are then fed into a convolutional neural network (CNN). By combining the strengths of filter banks and CNNs, this method improves SSVEP feature extraction and increases classification accuracy. Experimental results on publicly available SSVEP datasets show that the FBFCNN model surpasses traditional techniques in both accuracy and Information Transfer Rate (ITR). Offering a reliable and efficient solution for real-time brain signal decoding, the FBFCNN model represents a significant advancement in SSVEP-based BCI systems.
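The preprocessing pipeline described above (filter bank, then FFT, then real/imaginary feature maps for the CNN) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `fbfcnn_features`, the choice of a 4th-order Butterworth filter, and the example sub-band edges are all assumptions made for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fbfcnn_features(signal, fs, bands, n_fft=512):
    """Hypothetical FBFCNN-style preprocessing sketch.

    signal : 1-D SSVEP channel, shape (n_samples,)
    fs     : sampling rate in Hz
    bands  : list of (low, high) passband edges in Hz (assumed values)
    Returns an array of shape (n_bands, 2, n_fft // 2) stacking the
    real and imaginary FFT components of each sub-band, ready to be
    treated as input channels of a CNN.
    """
    feats = []
    for low, high in bands:
        # 4th-order Butterworth band-pass (an assumed design choice)
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        sub = filtfilt(b, a, signal)  # zero-phase filtering of the sub-band
        # Frequency-domain representation of the sub-band
        spec = np.fft.rfft(sub, n=n_fft)[: n_fft // 2]
        feats.append(np.stack([spec.real, spec.imag]))
    return np.stack(feats)

# Example: 2 s of a 12 Hz SSVEP-like sinusoid sampled at 250 Hz
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 12 * t)
feats = fbfcnn_features(x, fs, bands=[(6, 50), (14, 50), (22, 50)])
print(feats.shape)  # (3, 2, 256)
```

Keeping both real and imaginary spectral components, rather than magnitude alone, preserves phase information that the CNN can exploit when discriminating between stimulation frequencies.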