Abstract

Continuous decoding of hand kinematics has recently been explored for the intuitive control of electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs). Deep neural networks (DNNs) are emerging as powerful decoders owing to their ability to automatically learn features from lightly pre-processed signals. However, DNNs for kinematics decoding lack interpretability of the learned features and have so far been used only to realize within-subject decoders, without testing other training approaches, such as transfer learning, that could reduce calibration time. Here, we aim to overcome these limitations by using an interpretable convolutional neural network (ICNN) to decode 2-D hand kinematics (position and velocity) from EEG in a pursuit tracking task performed by 13 participants. The ICNN is trained using both within-subject and cross-subject strategies, and we also test the feasibility of transferring the knowledge learned on other subjects to a new one. Moreover, the network eases the interpretation of the learned spectral and spatial EEG features. Our ICNN outperformed most of the other state-of-the-art decoders, showing the best trade-off between performance, size, and training time. Furthermore, transfer learning improved kinematics prediction in the low-data regime. The network attributed the highest relevance for decoding to the delta band across all subjects, and to higher frequencies (alpha, beta, low-gamma) for a cluster of them; contralateral central and parieto-occipital sites were the most relevant, reflecting the involvement of sensorimotor, visual, and visuo-motor processing. The approach improved the quality of kinematics prediction from EEG while also allowing interpretation of the most relevant spectral and spatial features.
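To make the cross-subject pretraining and subsequent fine-tuning strategy concrete, the sketch below outlines how a compact convolutional decoder mapping EEG windows to 2-D hand kinematics could be pretrained on data pooled from other subjects and then fine-tuned on a small amount of data from a new subject. This is not the paper's ICNN: the architecture, hyperparameters, output dimensionality, and the data loaders (KinematicsCNN, pooled_loader, new_subject_loader) are illustrative assumptions only.

    # Minimal sketch (PyTorch): cross-subject pretraining + within-subject fine-tuning
    # of a small CNN regressing 2-D hand kinematics from EEG windows.
    import torch
    import torch.nn as nn

    class KinematicsCNN(nn.Module):
        """Hypothetical compact decoder: temporal conv -> spatial conv -> regression head."""
        def __init__(self, n_channels=61, n_outputs=4):  # 4 = x/y position + x/y velocity (assumed)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=(1, 65), padding=(0, 32)),  # temporal filtering
                nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),          # spatial filtering
                nn.BatchNorm2d(16),
                nn.ELU(),
                nn.AvgPool2d((1, 8)),
                nn.Flatten(),
            )
            self.head = nn.LazyLinear(n_outputs)  # kinematics regression head

        def forward(self, x):  # x: (batch, 1, channels, time)
            return self.head(self.features(x))

    def train(model, loader, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for eeg, kin in loader:
                opt.zero_grad()
                loss_fn(model(eeg), kin).backward()
                opt.step()

    # Transfer-learning workflow (assumed DataLoaders):
    # model = KinematicsCNN()
    # train(model, pooled_loader, epochs=100, lr=1e-3)       # pretrain on other subjects
    # train(model, new_subject_loader, epochs=20, lr=1e-4)   # fine-tune in the low-data regime

In this hedged workflow, fine-tuning with a lower learning rate on the new subject's limited data is one common way to exploit knowledge learned cross-subject, which is the rationale behind the calibration-time reduction discussed in the abstract.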
