Abstract

Brain-Machine Interfaces (BMIs) translate the neural signals of paralyzed people into commands for a neuro-prosthesis. During the subject's brain control (BC), the neural patterns may change over time, making it both crucial and challenging for the decoder to co-adapt with the dynamic neural patterns. The Kalman filter (KF) is commonly used for continuous control in BC. However, once the neural patterns deviate substantially from the training data, the KF requires a re-calibration session to maintain its performance. Reinforcement learning (RL), by contrast, can adapt its parameters online using a reward signal, but its discrete action selection makes it poorly suited to generating continuous motor states in BC. In this paper, we propose a reinforcement learning-based Kalman filter. We retain the state-transition model of the KF for continuous motor state prediction, while using RL to generate an action from the corresponding neural pattern, which then serves as a correction to the state prediction. The RL parameters are continuously adjusted by the reward signal during BC. In this way, continuous motor state prediction can be achieved even after the neural patterns have drifted over time. The proposed algorithm is tested on a simulated rat lever-pressing experiment in which the rat's neural patterns drift across days. Compared with a pure KF without re-calibration, our algorithm follows the neural pattern drift online and maintains good performance.

Clinical Relevance— The proposed method bridges the gap between online parameter adaptation and continuous control of the neuro-prosthesis, and is promising for adaptive brain control applications in clinical use.
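
To make the scheme concrete, the following is a minimal sketch of the RL-corrected KF update described in the abstract. It assumes a linear state-transition model and a simple tabular-style Q-learning rule over a discrete set of correction actions; the class name, parameters, and learning rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class RLKalmanFilter:
    """Illustrative RL-corrected Kalman-style decoder (sketch, not the paper's code)."""

    def __init__(self, A, actions, n_features,
                 alpha=0.01, gamma=0.9, epsilon=0.1):
        self.A = np.asarray(A)                     # KF state-transition matrix
        self.actions = [np.asarray(a) for a in actions]  # discrete correction vectors
        self.theta = np.zeros((len(actions), n_features))  # linear Q weights
        self.alpha = alpha                         # learning rate
        self.gamma = gamma                         # discount factor
        self.epsilon = epsilon                     # exploration rate

    def q_values(self, features):
        # Linear Q-function over the neural features.
        return self.theta @ features

    def select_action(self, features):
        # Epsilon-greedy choice keeps exploring as neural patterns drift.
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.actions))
        return int(np.argmax(self.q_values(features)))

    def step(self, x_prev, features):
        # 1) KF-style transition keeps the motor state continuous.
        x_pred = self.A @ x_prev
        # 2) The RL action decoded from the neural pattern corrects it.
        a = self.select_action(features)
        return x_pred + self.actions[a], a

    def update(self, features, a, reward, next_features):
        # Q-learning step driven by the reward signal in brain control,
        # so the decoder co-adapts with the drifting neural patterns.
        td_target = reward + self.gamma * np.max(self.q_values(next_features))
        td_error = td_target - self.q_values(features)[a]
        self.theta[a] += self.alpha * td_error * features
```

At each control step, `step` propagates the motor state through the transition model and corrects it with the RL-decoded action; once the reward arrives, `update` adjusts the Q weights, which is how such a decoder could track neural pattern drift without a re-calibration session.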
