Abstract

Closed-loop decoder adaptation (CLDA) is an emerging paradigm for improving or maintaining the online performance of brain-machine interfaces (BMIs). Here, we present Likelihood Gradient Ascent (LGA), a novel CLDA algorithm for a Kalman filter (KF) decoder that uses stochastic, gradient-based corrections to update KF parameters during closed-loop BMI operation. LGA's gradient-based paradigm presents a variety of potential advantages over other "batch" CLDA methods, including the ability to update decoder parameters on any time-scale, even on every decoder iteration. Using a closed-loop BMI simulator, we compare the LGA algorithm to the Adaptive Kalman Filter (AKF), a partially gradient-based CLDA algorithm that has been previously tested in non-human primate experiments. In contrast to the AKF's separate mean-squared error objective functions, LGA's update rules are derived directly from a single log likelihood objective, making it one step towards a potentially optimal continuously adaptive CLDA algorithm for BMIs.
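To make the gradient-based paradigm concrete, here is a minimal sketch of a stochastic log-likelihood gradient update for a Kalman filter observation matrix. All dimensions, the learning rate, and the restriction to updating only the observation matrix C (with the noise covariance Q held fixed) are illustrative assumptions, not the paper's actual LGA update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2-D kinematic state, 6 neural channels.
n_state, n_obs = 2, 6
C_true = rng.standard_normal((n_obs, n_state))  # "true" observation matrix
Q = 0.1 * np.eye(n_obs)                         # observation noise covariance (held fixed here)
Q_inv = np.linalg.inv(Q)

# The decoder starts with a perturbed estimate of C.
C_hat = C_true + 0.5 * rng.standard_normal((n_obs, n_state))

lr = 0.005  # learning rate (assumed; would be tuned in a real experiment)
for _ in range(3000):
    # Simulated intended kinematic state and the neural observation it evokes.
    x = rng.standard_normal(n_state)
    y = C_true @ x + rng.multivariate_normal(np.zeros(n_obs), Q)
    # Stochastic gradient of the Gaussian log likelihood
    # log p(y | x) = -0.5 (y - Cx)^T Q^{-1} (y - Cx) + const
    # with respect to C:  Q^{-1} (y - C x) x^T
    grad_C = Q_inv @ np.outer(y - C_hat @ x, x)
    C_hat += lr * grad_C  # one small correction per decoder iteration

err = float(np.linalg.norm(C_hat - C_true))
print(f"Frobenius error of recovered C: {err:.3f}")
```

Because each update uses only the current observation, the correction can be applied on every decoder iteration rather than waiting to accumulate a batch, which is the property the abstract highlights.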
