Brain-machine interfaces (BMIs, or brain-computer interfaces, BCIs) have caused a lot of excitement in the past few years; they promise to make the lame walk, the mute talk, the blind see, and perhaps even to enhance cognition (Serruya and Kahana, 2008). Already, cochlear implants have proven immensely successful at making the deaf hear: over 150,000 completely deaf people can now participate in two-way oral communication with the rest of the hearing world, thanks to multi-electrode stimulation of their cochlear nerves, controlled by compact, even stylish, miniature computers worn behind their ears (Chorost, 2006). Deep-brain stimulators have also been quite successful at modulating aberrant neural activity to alleviate Parkinsonism, chronic pain, and tremor, among other disorders (Gross, 2004). Artificial retinas for the blind (Yanai et al., 2007), voices for locked-in patients (Brumberg et al., 2010), and neurally-controlled robotic limbs for amputees (Ojakangas et al., 2006) have been substantially less successful, but progress seems to be accelerating. What is holding them back? Perhaps we are only beginning to appreciate the complexity and dynamics of the neural circuits involved.

Motor BMIs to date have been unidirectional: neural recordings control, for example, a robotic limb, while the user receives only visual feedback. Great advances in usability, dexterity, and acceptance, and reductions in cognitive load, may come when such interfaces include sensory feedback (touch, temperature, proprioception) delivered directly to the nervous system via electrical stimulation. Even for sensory prostheses and deep-brain stimulators, it may prove useful to continuously monitor neural responses to stimulation and to adjust the stimulation to optimize function or therapeutic benefit. In the future, sensory, motor, and modulatory BMIs are likely to take advantage of a continuous dialog between the nervous system and artificial computational devices. Bridging the large chasm between the present and that closed-loop future will certainly require much basic research using reduced preparations.

Mussa-Ivaldi and co-workers have pioneered bidirectional BMIs between simple nervous systems maintained in vitro and artificial robotic bodies (Reger et al., 2000; Kositsky et al., 2009; Mussa-Ivaldi et al., 2010). These hybrid living/artificial robots, or hybrots (Potter, 2004), are simpler than intact animals, having fewer, better-defined signals that are under the control of the experimenter. Mussa-Ivaldi and co-workers studied the dynamics of a vestibular circuit in the lamprey brainstem, giving it an artificial body (a small wheeled robot) controlled by the lamprey brain's motor output signals. The robot's light-sensor readings were translated in real time into frequency-coded electrical stimuli for the vestibular circuit. By observing the neurally-controlled robot's responses to light input, the dynamical dimension of the neural system could be estimated; that is, the number of free parameters in a set of equations that can accurately predict the system's input-output behavior. Initial attempts to model the system with linear, and then nonlinear (e.g., fourth-order polynomial), static equations proved inadequate. Models in which the current output is also a function of recent output, i.e., models containing a simple first-order dynamic component, fared much better at accurately describing the system's behavior, even with fewer free parameters.
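To make that modeling comparison concrete, the following is a minimal Python sketch, not the authors' actual analysis: it fits a toy input-output record with (a) a static fourth-order polynomial in the current input and (b) a first-order dynamic model whose output also depends on the previous output. The simulated data, coefficients, and noise level are all hypothetical; the point is simply that the dynamic model, despite having fewer free parameters, can capture behavior that a static polynomial cannot.

```python
import numpy as np

# Hypothetical input-output data: u is the light-derived input sequence,
# y is the recorded motor-output sequence. Purely illustrative.
rng = np.random.default_rng(0)
T = 500
u = rng.uniform(0.0, 1.0, T)
y = np.zeros(T)
for t in range(1, T):
    # Toy system with memory: output depends on the input AND its own recent past.
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t] + 0.02 * rng.standard_normal()

# Static model: y[t] = f(u[t]) with a 4th-order polynomial (5 free parameters).
static_coeffs = np.polyfit(u[1:], y[1:], deg=4)
y_static = np.polyval(static_coeffs, u[1:])

# First-order dynamic model: y[t] = a*y[t-1] + b*u[t] + c (3 free parameters).
X = np.column_stack([y[:-1], u[1:], np.ones(T - 1)])
a, b, c = np.linalg.lstsq(X, y[1:], rcond=None)[0]
y_dynamic = X @ np.array([a, b, c])

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

print("static 4th-order polynomial MSE:", mse(y_static, y[1:]))
print("first-order dynamic model MSE:  ", mse(y_dynamic, y[1:]))
```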
This points to the dynamical dimension as an important property of neural circuits, one that can be estimated in hybrid systems as the measurable dimensionality of the whole system minus the known dimensionality of the artificial component (the robot).

Thanks to their controllability and relative simplicity, artificially embodied in vitro networks provide excellent test beds for studying plasticity mechanisms. Using cortical networks cultured on multi-electrode arrays, several groups have demonstrated that the input-output functions of the networks can be reliably altered by multi-electrode stimulation, to effect desired behavior or to normalize aberrant activity patterns (Wagenaar et al., 2005; Novellino et al., 2007; Bakkum et al., 2008; Chiappalone et al., 2008; Marom et al., 2009). It is not hard to imagine that this electrical training and modulation of cortical tissue could form the basis of future adaptive, closed-loop BMIs. The continuous electrical dialog would take advantage of brain plasticity to enhance functionality, or merely to allow the user to adjust to the neural interface more quickly and easily. The ideal system would also have “learning” on the artificial side, such as optimizing a set of nonlinear “force fields” that most effectively map recorded neural activity onto (artificial) motor behavior, or that map artificial sensory input (or desired neuromodulation) onto neural stimulation.
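As a rough illustration of what such “learning” on the artificial side could look like in code, here is a hypothetical sketch of one cycle of a bidirectional interface: a ridge-regression decoder (a simple stand-in for the nonlinear force-field optimization described above, not the actual method of Mussa-Ivaldi and co-workers) maps recorded firing rates onto robot wheel velocities, and a light-sensor reading is encoded as a stimulation pulse rate. All array shapes, the 2-50 Hz stimulation range, and the simulated data are assumptions chosen for illustration only.

```python
import numpy as np

# Hypothetical one-cycle sketch of a bidirectional (closed-loop) BMI.
rng = np.random.default_rng(1)

# "Learning" on the artificial side: fit a linear decoder (ridge regression)
# that maps recorded firing rates onto two robot wheel velocities.
n_units, n_samples = 16, 200
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)    # spikes/bin
true_W = rng.normal(size=(n_units, 2))                                # hidden map (toy)
velocities = rates @ true_W + 0.1 * rng.normal(size=(n_samples, 2))   # left/right wheels

lam = 1.0  # ridge penalty
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_units), rates.T @ velocities)

# Decoding step: recorded activity -> motor command for the robot.
current_rates = rng.poisson(5.0, size=n_units).astype(float)
wheel_cmd = current_rates @ W            # (left_velocity, right_velocity)

# Encoding step: robot light sensor -> frequency-coded stimulation;
# brighter light yields a higher pulse rate (assumed 2-50 Hz range).
light = 0.7                               # normalized sensor reading in [0, 1]
stim_hz = 2.0 + 48.0 * light

print("wheel command:", wheel_cmd, "| stimulation rate (Hz):", round(stim_hz, 1))
```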