Abstract

The planning and control of even simple movements, such as reaching for an object, rely on somatosensory feedback about the state of the limb. Such feedback will be equally important for naturalistic control of neuroprosthetic devices. For this reason, there has been considerable interest in the development of systems for artificial somatosensory feedback, in particular using electrical microstimulation of the brain. Much of this work has focused on creating “biomimetic” patterns of neural activation, i.e., trying to replicate natural sensory-driven activity; however, the challenges facing this approach remain significant. We have developed a complementary approach, focusing instead on the brain's natural ability to learn. In particular, the brain learns to combine somatosensory and visual feedback of the limb in a statistically optimal fashion and to recalibrate the two senses when they come out of alignment. Moreover, computational work from our lab shows that these learning processes can be achieved by simple algorithms driven only by the spatiotemporal correlations between the two sensory signals. We have tested this idea in a demonstration of a novel, learning-based approach to artificial sensory feedback for motor control. Animals were trained to perform a reaching task under the guidance of visual feedback. They were then exposed to a novel artificial feedback signal in the form of a non-biomimetic pattern of multielectrode intracortical microstimulation (ICMS). After training with correlated visual and ICMS feedback, the animals were able to perform precise movements with the artificial signal alone. Furthermore, they combined the ICMS signal with vision in a statistically optimal fashion, as they would for two natural stimuli. This result serves as a proof of concept for a learning-based approach to artificial feedback with brain-machine interfaces.
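
To make the notion of “statistically optimal” integration concrete: the standard account in the cue-combination literature is minimum-variance (maximum-likelihood) fusion, in which each sensory estimate is weighted by its reliability, i.e., its inverse variance. The Python sketch below illustrates that model only; the function name and numerical values are hypothetical, and the code is not taken from the study.

```python
import numpy as np

# Minimum-variance (maximum-likelihood) cue combination.
# Each unimodal estimate is modeled as Gaussian; the optimal fused
# estimate weights each cue by its inverse variance (its reliability).

def integrate_cues(x_vis, var_vis, x_icms, var_icms):
    """Fuse two noisy estimates of limb position.

    Returns the combined estimate and its variance. The combined
    variance is never larger than either unimodal variance, which is
    the behavioral signature of optimal integration.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_icms)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_icms
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_icms)
    return x_hat, var_hat

# Example: vision is twice as reliable as the ICMS signal, so it gets
# twice the weight, and the fused variance drops below both inputs.
x_hat, var_hat = integrate_cues(x_vis=1.0, var_vis=0.5, x_icms=2.0, var_icms=1.0)
print(x_hat, var_hat)  # 1.333..., 0.333...
```

The hallmark of this scheme is that the variance of the bimodal estimate falls below the variance of either unimodal estimate, which is the standard behavioral test for optimal integration.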
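
The abstract also notes that such cross-modal learning can be driven purely by the spatiotemporal correlations between the two feedback streams. As a toy illustration of that idea (not the algorithm from the cited computational work), the sketch below uses a simple delta-rule update to learn a linear mapping from an assumed 8-channel artificial signal to concurrent 2-D visual feedback; the encoding, learning rule, and all parameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each exposure trial pairs an 8-channel artificial
# feedback pattern s with the concurrent 2-D visual feedback x of the
# hand. Here the true relationship is linear, with a little sensory noise.
n_channels, n_dims = 8, 2
true_map = rng.normal(size=(n_dims, n_channels))

W = np.zeros((n_dims, n_channels))  # learned decoder, starts naive
lr = 0.05                           # learning rate

for trial in range(5000):
    s = rng.normal(size=n_channels)                    # artificial signal
    x = true_map @ s + 0.1 * rng.normal(size=n_dims)   # paired visual signal
    # Correlation-driven update: nudge the decoded estimate W @ s toward
    # the co-occurring visual signal. Nothing beyond the paired sensory
    # streams themselves is required to drive learning.
    W += lr * np.outer(x - W @ s, s)

print(np.abs(W - true_map).max())  # residual error is small after pairing
```

After training, `W @ s` recovers limb position from the artificial signal alone, which parallels the behavioral finding that animals could perform precise movements with ICMS feedback alone.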
