Abstract

In the field of brain–computer interfaces (BCIs), brain decoding using electroencephalography (EEG) is an essential research direction, and motor imagery (MI) EEG-based BCIs can not only support the rehabilitation of patients with physical disabilities but also augment the physical capabilities of healthy people. Most existing MI-based BCI studies are limited to discrete EEG classification or the reconstruction of directional limb trajectories in 3-D space. To meet the requirements of BCI systems in practical applications, we explore the decoding of continuous, nondirectional motion-imagination trajectories in 3-D space based on Chinese sign language. We propose a motor imagery trajectory reconstruction Transformer (MITRT) model that decodes the EEG signals of subjects performing motor imagery and recovers the 3-D positional changes of the shoulder, elbow, and wrist skeleton points encoded in the neural activity. We incorporate geometric constraint features of the upper-limb skeleton points into the model, providing MITRT with prior knowledge that improves the reconstruction accuracy of the spatial positions. To verify the decoding performance of the proposed model, we collected MI EEG signals from 20 subjects performing Chinese sign language. The experimental results show that the average Pearson correlation coefficient over the six skeleton points was 0.975, significantly higher than that of the comparison models. This study is the first attempt to reconstruct continuous, nondirectional upper-limb MI trajectories of multiple joints based on Chinese sign language. The results demonstrate that it is feasible to decode and reconstruct imagined 3-D trajectories of human upper-limb skeleton points from scalp EEG.
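The abstract reports reconstruction quality as the Pearson correlation coefficient averaged over the six skeleton points. As a minimal sketch (not the authors' code), the snippet below shows how such an evaluation could be computed; the array shapes, function names, and placeholder data are illustrative assumptions only.

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two 1-D time series."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def mean_trajectory_correlation(pred: np.ndarray, true: np.ndarray) -> float:
    """Average Pearson r over skeleton points and x/y/z coordinates.

    pred, true: arrays of shape (n_points, n_timesteps, 3), e.g. the six
    shoulder/elbow/wrist points over one continuous trial.
    """
    rs = [
        pearson_r(pred[p, :, axis], true[p, :, axis])
        for p in range(pred.shape[0])
        for axis in range(pred.shape[2])
    ]
    return float(np.mean(rs))

# Example with random placeholder data; real inputs would be the decoded and
# motion-captured 3-D trajectories of the six upper-limb skeleton points.
rng = np.random.default_rng(0)
true = rng.standard_normal((6, 500, 3))
pred = true + 0.1 * rng.standard_normal((6, 500, 3))
print(f"mean Pearson r = {mean_trajectory_correlation(pred, true):.3f}")
```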
