Abstract

Lower-limb exoskeletons have been used extensively in rehabilitation to assist disabled people with their therapies, and brain–machine interfaces (BMIs) provide an effective and natural control scheme for them. However, the limited performance of decoding lower-limb kinematics from brain signals restricts the broader growth of both the BMI and rehabilitation industries. To address these challenges, we propose an ensemble method for lower-limb motor imagery (MI) classification. The proposed model combines multiple techniques, comprising both shallow and deep components, to boost performance. A traditional wavelet transformation followed by filter-bank common spatial pattern (CSP) extracts neurophysiologically plausible patterns, while multi-head self-attention (MSA) followed by a temporal convolutional network (TCN) extracts deeper, more generalized encoded patterns. Experiments with a customized lower-limb exoskeleton on 8 subjects over 3 consecutive sessions showed that the proposed method achieved accuracies of 60.27% and 64.20% for three-class (MI of left leg, MI of right leg, and rest) and two-class (lower-limb MI vs. rest) classification, respectively. Moreover, the proposed model achieved accuracy improvements of up to 4% and 2% over current state-of-the-art (SOTA) techniques in the subject-specific and subject-independent modes, respectively. Finally, feature analysis was conducted to reveal discriminative brain patterns in each MI task and across sessions with different feedback modalities. The proposed models, integrated into the brain-actuated lower-limb exoskeleton, establish a promising BMI for gait training and neuroprosthesis.
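The shallow branch of the ensemble rests on CSP, which learns spatial filters that maximize the variance ratio between two classes via a generalized eigendecomposition of the class covariance matrices. A minimal NumPy/SciPy sketch of two-class CSP feature extraction is shown below; the function names, array shapes, and number of filter pairs are illustrative assumptions, not the paper's actual implementation (which also applies wavelet and filter-bank preprocessing first).

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters for two classes.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) filter matrix.
    """
    def mean_cov(trials):
        # Average the per-trial channel covariance matrices.
        return np.mean([np.cov(t) for t in trials], axis=0)

    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: eigenvectors sort channels by
    # how strongly their variance differs between the two classes.
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(vals)
    # Keep filters from both ends of the eigenvalue spectrum:
    # most discriminative for class A and for class B.
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T

def csp_features(trial, W):
    """Log-variance features of a single (n_channels, n_samples) trial."""
    z = W @ trial                      # spatially filtered signals
    var = z.var(axis=1)                # per-filter variance
    return np.log(var / var.sum())     # normalized log-variance
```

In a full pipeline, `csp_filters` would be fit per frequency band of the filter bank, and the concatenated log-variance features fed to a classifier alongside the deep MSA/TCN branch.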
