Abstract

Functional near-infrared spectroscopy (fNIRS) is a promising neuroimaging technology, and fNIRS signal classification has long been a central problem in brain-computer interface (BCI) research. Inspired by the success of the self-attention-based Transformer in natural language processing and computer vision, we propose a Transformer-based fNIRS classification network, named fNIRS-T. We explore spatial-level and channel-level representations of fNIRS signals to improve data utilization and network representation capacity. In addition, we design a preprocessing module, consisting of one-dimensional average pooling and layer normalization, to replace the filtering and baseline correction steps of conventional data preprocessing; integrating it into fNIRS-T yields an end-to-end network, called fNIRS-PreT. Compared with traditional machine learning classifiers, a convolutional neural network (CNN), and a long short-term memory (LSTM) network, the proposed models obtain the best accuracy on three open-access datasets. Specifically, on the largest ternary classification task (30 subjects), which includes three types of overt movements, fNIRS-T, CNN, and LSTM achieve 75.49%, 72.89%, and 61.94% test-set accuracy, respectively. Compared with traditional classifiers, fNIRS-T is at least 27.41% more accurate than statistical features and 6.79% more accurate than well-designed features. In the individual-subject experiment on the ternary classification task, fNIRS-T achieves an average per-subject accuracy of 78.22%, surpassing CNN and LSTM by large margins of +4.75% and +11.33%, respectively. fNIRS-PreT, operating on raw data, also achieves performance competitive with fNIRS-T. The proposed models therefore significantly improve the performance of fNIRS-based BCIs.
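
As a rough illustration (not taken from the paper), a minimal PyTorch sketch of the described preprocessing module might look like the following. The kernel size, stride, input shape (batch, channels, time), and the choice to normalize over the time dimension are all assumptions; the abstract specifies only that the module combines one-dimensional average pooling with layer normalization.

```python
import torch
import torch.nn as nn

class FNIRSPreprocess(nn.Module):
    """Minimal sketch of the preprocessing module described in the abstract:
    1-D average pooling (to smooth the raw signal in place of filtering)
    followed by layer normalization (in place of baseline correction).
    Kernel size, stride, and the normalized dimension are assumptions."""

    def __init__(self, pooled_len: int, kernel_size: int = 5, stride: int = 2):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=kernel_size, stride=stride)
        self.norm = nn.LayerNorm(pooled_len)  # normalize each channel over time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) raw fNIRS signal
        x = self.pool(x)  # downsample and smooth high-frequency noise
        return self.norm(x)

# Hypothetical usage: batch of 8 trials, 30 channels, 400 time samples
x = torch.randn(8, 30, 400)
pooled_len = (400 - 5) // 2 + 1  # AvgPool1d output length = 198
out = FNIRSPreprocess(pooled_len)(x)
print(out.shape)  # torch.Size([8, 30, 198])
```

Because both operations are differentiable, such a module can sit in front of the Transformer and be trained jointly with it, which is what makes the combined fNIRS-PreT network end-to-end on raw data.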
