Abstract

Brain-computer interfaces (BCIs) based on motor imagery (MI) have played an important role and achieved impressive results in motor rehabilitation. However, most previous research has focused on the upper limb, although many disabled patients need the same technology to assist their lower-limb rehabilitation training. Lower-limb MI is more difficult to detect than upper-limb MI because the corresponding sensorimotor cortex is smaller and located deeper. To address this problem, a new paradigm is proposed in which subjects perform walking imagery (WI) in a virtual environment (VE) to further enhance their brain activity. Furthermore, to decode WI efficiently from low-reliability and limited data, we propose a stacked denoising auto-encoder (SDAE) network trained on multi-view features obtained in the VE. First, spatial- and frequency-based features are extracted from the raw data and fused. Second, the SDAE network extracts hidden features from these fused features. Third, the original and hidden features are fused to train a Softmax classifier. Experimental results on our self-collected data demonstrate that the SDAE network outperforms other deep learning methods in classifying WI in the VE.
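The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: a stacked denoising auto-encoder learns hidden features from fused spatial/frequency features, and a Softmax classifier is trained on the concatenation of the original and hidden features. Layer sizes, the Gaussian corruption level, the 64-dimensional input, and the two-class setup are illustrative assumptions.

```python
# Hedged sketch of an SDAE + Softmax pipeline (dimensions and noise level are assumptions).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim, hid_dim, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        # Corrupt the input with Gaussian noise, then reconstruct the clean input.
        x_noisy = x + self.noise_std * torch.randn_like(x)
        h = self.encoder(x_noisy)
        return self.decoder(h), h

class SDAEClassifier(nn.Module):
    def __init__(self, in_dim=64, hid_dims=(128, 64), n_classes=2):
        super().__init__()
        self.aes = nn.ModuleList()
        d = in_dim
        for h in hid_dims:
            self.aes.append(DenoisingAE(d, h))
            d = h
        # Softmax classifier over the fusion of input features and deepest hidden features.
        self.classifier = nn.Linear(in_dim + hid_dims[-1], n_classes)

    def forward(self, x):
        h = x
        for ae in self.aes:
            _, h = ae(h)
        fused = torch.cat([x, h], dim=1)   # fuse original and hidden features
        return self.classifier(fused)      # logits; softmax applied via the loss

# Usage on dummy fused spatial+frequency features (batch of 8, 64-dim):
model = SDAEClassifier()
logits = model(torch.randn(8, 64))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
```

In practice the auto-encoder layers would typically be pre-trained greedily on the reconstruction objective before the classifier is trained on the fused features; the sketch above only shows the forward structure.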
