Abstract

Background: Due to the nonstationarity of neural recordings in intracortical brain–machine interfaces, daily supervised retraining is usually required to maintain decoder performance. This burden can be reduced with a reinforcement learning (RL) based self-recalibrating decoder; however, quickly exploring new knowledge while maintaining good performance remains a challenge for RL-based decoders. Methods: To address this problem, we proposed an attention-gated RL-based algorithm that combines transfer learning, mini-batch updating, and weight updating schemes to accelerate weight updating and avoid over-fitting. The proposed algorithm was tested on intracortical neural data recorded from two monkeys to decode their reaching positions and grasping gestures. Results: The proposed algorithm achieved an approximately 20% increase in classification accuracy over the non-retrained classifier and even exceeded the accuracy of the daily retrained classifier. Moreover, compared with a conventional RL method, our algorithm improved accuracy by approximately 10% and online weight updating speed by approximately 70 times. Conclusions: The proposed self-recalibrating decoder achieved good, robust decoding performance with fast weight updating, which may facilitate its application in wearable devices and clinical practice.
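To make the update scheme described above more concrete, the minimal Python sketch below shows an attention-gated RL style classifier whose output weights are nudged by a reward prediction error and updated over a small mini-batch of recent trials. The class name, hyperparameters, and exact update rule are illustrative assumptions and do not reproduce the paper's implementation; transfer learning would enter through how the weights or input feature space are initialized from a previous session.

```python
import numpy as np

# Illustrative sketch only: an attention-gated RL style classifier with a
# mini-batch update, under the assumptions stated in the text above. It is
# not the authors' actual implementation.

class AGRLClassifierSketch:
    def __init__(self, n_features, n_classes, lr=0.05, batch_size=16, rng=None):
        # Output weights; in a transfer-learning setting these could be
        # initialized from a decoder trained on an earlier (source) session.
        self.W = np.zeros((n_classes, n_features))
        self.lr = lr
        self.batch_size = batch_size
        self.buffer = []          # recent trials awaiting a mini-batch update
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def _softmax(self, x):
        z = np.exp(x - x.max())
        return z / z.sum()

    def act(self, features):
        """Stochastically pick a class (e.g., reach target) from the softmax."""
        p = self._softmax(self.W @ features)
        action = int(self.rng.choice(len(p), p=p))
        return action, p

    def update(self, features, action, p, reward):
        """Buffer the trial; apply one averaged update when the batch is full."""
        delta = reward - p[action]            # reward prediction error
        self.buffer.append((features, action, delta))
        if len(self.buffer) >= self.batch_size:
            grad = np.zeros_like(self.W)
            for f, a, d in self.buffer:
                grad[a] += d * f              # plasticity gated by the selected
                                              # (attended) output unit
            self.W += self.lr * grad / len(self.buffer)
            self.buffer.clear()
```

In use, act would be called on each trial's binned firing-rate vector, and update would receive a binary reward indicating whether the decoded reach target or gesture was correct.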

Highlights

  • In intracortical brain–machine interfaces, neural electrodes are chronically implanted into the cortex to record neural activity, which is translated into control commands for assistive devices to help amputees or paralyzed patients restore their motor functions [1,2]

  • The main contributions of this paper are as follows: (1) We proposed a new RLBMI algorithm, TMAGRL, which overcomes the difficulty of combining general transfer learning (TL) with online RLBMI by extracting the projected feature space from the source domain alone in an unsupervised manner, and which addresses the low learning efficiency and unstable performance of conventional RLBMI (a rough sketch of this idea is given after these highlights)

  • This might be the first time TL has been integrated with the RLBMI. (2) We introduced mini-batch (MB) and weight updating schemes into the RLBMI to further speed up weight updating and help mitigate over-fitting
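
As a rough, non-authoritative sketch of contribution (1), the snippet below fits an unsupervised projection on source-domain firing rates only and reuses it to map new-day trials into the same feature space before RL recalibration. PCA and the function names are assumptions standing in for whatever transfer mapping the paper actually uses.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch only: an unsupervised projection learned from the
# source domain alone, reused on new-day (target-domain) trials. PCA is an
# assumed stand-in for the paper's actual transfer mapping.

def fit_source_projection(source_firing_rates, n_components=20):
    """Fit a label-free projection on source-session data, shape (trials, units)."""
    proj = PCA(n_components=n_components)
    proj.fit(source_firing_rates)
    return proj

def project(proj, firing_rates):
    """Map source- or target-domain trials into the shared low-dimensional space."""
    return proj.transform(np.atleast_2d(firing_rates))

# The RL decoder would then be recalibrated on project(proj, new_day_trial)
# rather than on raw firing rates.
```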

Introduction

In intracortical brain–machine interfaces (iBMIs), neural electrodes are chronically implanted into the cortex to record neural activity, which is translated into control commands for assistive devices to help amputees or paralyzed patients restore their motor functions [1,2]. Several studies have implemented this approach with good decoding performance; most of them employ supervised learning, training the decoder to map the recorded neural activity to kinematic outputs such as the actual movement trajectory or movement labels [25,26,27]. Because the neural recordings are nonstationary, such decoders typically require daily supervised retraining to maintain their performance, a burden that can be reduced with an RL-based self-recalibrating decoder. Compared with a conventional RL method, the algorithm proposed here improved accuracy by approximately 10% and online weight updating speed by approximately 70 times.
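
For contrast with the self-recalibrating decoder, a hedged sketch of the conventional supervised baseline mentioned above, a classifier retrained each day on freshly labeled trials, might look as follows. LDA and the function names are illustrative assumptions rather than the paper's actual baseline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative sketch only: the daily retrained supervised baseline, with LDA
# as an assumed stand-in classifier for mapping binned firing rates to
# movement labels.

def train_daily_decoder(day_firing_rates, day_labels):
    """Supervised retraining on that day's labeled trials."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(day_firing_rates, day_labels)   # X: (trials, units), y: (trials,)
    return clf

def decode(clf, firing_rates):
    """Predict the reach position or grasping gesture for new trials."""
    return clf.predict(np.atleast_2d(firing_rates))
```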
