Abstract

Tracing technologies back in time to their scientific and mathematical origins reveals surprising connections between the pure pursuit of knowledge and the opportunities afforded by that pursuit for new and unexpected applications. For example, Einstein's desire to eliminate the disparity between electricity and magnetism in Maxwell's equations impelled him to develop the special theory of relativity (Einstein 1922, p 41: 'The advance in method arises from the fact that the electric and magnetic fields lose their separate existences through the relativity of motion. A field which appears to be purely an electric field, judged from one system, has also magnetic field components when judged from another inertial system.'). His conviction that there should be no privileged inertial frame of reference (Einstein 1922, p 58: 'The possibility of explaining the numerical equality of inertia and gravitation by the unity of their nature gives to the general theory of relativity, according to my conviction, such a superiority over the conceptions of classical mechanics, that all the difficulties encountered must be considered as small in comparison with this progress.') further impelled him to utilize the non-Euclidean geometry originally developed by Riemann and others as a purely hypothetical alternative to classical geometry as the foundation for the general theory of relativity. Nowadays, anyone who depends on a global positioning system—which now includes many people who own smart phones—uses a system that would not work effectively without incorporating corrections from both special and general relativity (Ashby 2003).

As another example, G H Hardy famously proclaimed his conviction that his work on number theory, which he pursued for the sheer love of exploring the beauty of mathematical structures, was unlikely to find any practical applications (Hardy 1940, pp 135–6: 'The general conclusion, surely, stands out plainly enough. If useful knowledge is, as we agreed provisionally to say, knowledge which is likely, now or in the comparatively near future, to contribute to the material comfort of mankind, so that mere intellectual satisfaction is irrelevant, then the great bulk of higher mathematics is useless. Modern geometry and algebra, the theory of numbers, the theory of aggregates and functions, relativity, quantum mechanics—no one of them stands the test much better than another, and there is no real mathematician whose life can be justified on this ground. If this be the test, then Abel, Riemann and Poincaré wasted their lives; their contribution to human comfort was negligible, and the world would have been as happy a place without them.'). Ironically, the famous Rivest, Shamir and Adleman (RSA) algorithm, which currently underpins much of modern cryptography, depends on fundamental ideas from number theory (Cormen et al 2001). Finally, the indeterminacy of the quantum states of light, atoms and molecules, a source of great theoretical interest in the first quarter of the last century, is now in the process of being harnessed for creating algorithms, and novel computers, that can solve problems that could not be addressed by current computing devices (Steane 1998, Ralph and Pryde 2010).

Thus, perhaps we should not be surprised that a focus on whether a three-body system (such as the sun, earth and moon) would remain stable over time ultimately became the basis for a new geometrical way of thinking about nonlinear dynamical systems, and that this approach has begun to find practical applications in the understanding and control of nervous systems, including novel ideas for brain–computer interfaces.

Classical dynamical systems theory began with the work of Newton on the motion of the planets. He was able to solve a two-body problem, the motion of the earth around the sun (Newton 1687, Chandrasekhar 1995).
Finding explicit solutions for the slightly more complicated problem of three bodies (for example, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949).

The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006).
The FitzHugh–Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools directly descended from the analysis of the Hodgkin and Huxley equations in FitzHugh and Nagumo's early work.

Mathematicians became increasingly interested in biological problems in general, and in the function of the nervous system in particular, during the latter part of the twentieth century. The natural tool for describing more complex neural systems whose patterns of activity unfold in time was nonlinear dynamical systems theory. Classic work from such investigators as Kolmogorov, Arnol'd, Moser, Malkin, Andronov, Hopf, Birkhoff, Hartman and others (reviewed in Izhikevich 2006) served as the basis for understanding the dynamics of neural models such as the coupling of oscillators for rhythmic behavior, leading to work such as that of Kopell and Ermentrout on the lamprey swimming system (Kopell and Ermentrout 1986, 1990), based on earlier models of Cohen et al (1982). Exploration of nonlinear interactions in neuronal populations, especially those that might be related to vision, led to the development of the Wilson–Cowan equations in the 1970s (Wilson and Cowan 1972, 1973). The advent of increasingly powerful personal computers also made it feasible to combine theoretical analyses with extensive numerical investigations of nonlinear dynamical systems. An important and influential example of such work was the detailed bifurcation analysis of Morris and Lecar's two-dimensional model of nonlinear dynamical behavior in the giant muscle fiber of the Pacific barnacle Balanus nubilus (Morris and Lecar 1981), done by Rinzel and Ermentrout in the late 1980s (Rinzel and Ermentrout 1989).
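The flavor of such numerical investigations can be conveyed with a minimal sketch of the FitzHugh–Nagumo system. The parameter values below are the standard textbook ones (not taken from any paper in this issue); with a constant drive I = 0.5 the unique fixed point is unstable, and the trajectory settles onto a stable limit cycle, the relaxation oscillation that serves as the simplest caricature of repetitive spiking.

```python
import numpy as np

def fitzhugh_nagumo(T=200.0, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08):
    """Integrate the FitzHugh-Nagumo equations with forward Euler.

    dv/dt = v - v^3/3 - w + I      (fast voltage-like variable)
    dw/dt = eps * (v + a - b*w)    (slow recovery variable)
    """
    n = int(T / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(n - 1):
        v[k + 1] = v[k] + dt * (v[k] - v[k] ** 3 / 3 - w[k] + I)
        w[k + 1] = w[k] + dt * eps * (v[k] + a - b * w[k])
    return v, w

v, w = fitzhugh_nagumo()
late = v[len(v) // 2:]         # discard the initial transient
print(late.min(), late.max())  # sustained oscillation, v swinging roughly between -2 and 2
```

Linearizing at the fixed point (v* ≈ -0.8) gives a positive trace and positive determinant, so the equilibrium is unstable and, because the cubic nullcline confines trajectories, a limit cycle must exist: the geometric style of argument that recurs throughout this issue.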
The mathematical analysis of bursting behavior based on decomposition of a dynamical system into fast and slow subsystems, an application of Fenichel's geometric singular perturbation theory (Fenichel 1979, Jones 1995), continues to play an important role. Recent work on dynamical analyses of neurons and neural circuits is described in Izhikevich's recent book (Izhikevich 2006), which is based in part on his own work in this area. This is a very small glimpse of a much larger literature; these mathematical themes recur throughout this issue. Practitioners of neural engineering who want to explore the language and role of dynamics further can find accessible introductions to the key ideas in works such as Strogatz (1994) and Izhikevich (2006).

In this special issue of Journal of Neural Engineering, we provide a sample of the vigor and excitement of the recent developments in the applications of nonlinear dynamical systems theory to the understanding and control of the nervous system. Four of the papers demonstrate the power of dynamical systems theory to analyze and understand neural systems, both in isolation and within a neuromechanical context (Coggan et al 2011, Nadim et al 2011, Spardy et al 2011a, 2011b). One paper focuses on the importance of noise and delay in dynamical systems for control (Milton 2011). Two papers focus on the dynamics of ion channels—in one paper, new approaches for estimating their parameters are described (Meng et al 2011), and in a second, the time courses of sodium ion channels are used to understand conduction block due to high-frequency stimulation (Ackermann et al 2011). Two papers focus on the use of optimal control theory to develop approaches for understanding (DeWolf and Eliasmith 2011) and controlling (Nabi and Moehlis 2011) the nervous system.
Finally, two papers begin to explore longer time scale neural dynamics through a combination of modeling and experiments, examining how animals learn to reduce the time required to forage for food at multiple sites (de Jong et al 2011), and how the dynamics of the respiratory system change with development (Fietkiewicz et al 2011).

The first four papers of this special issue illustrate the use of dynamical systems theory to analyze and understand neural circuitry and neuromechanical systems. The first of these papers uses the phase response curve (PRC) of an oscillator, which is a conceptual tool rooted in the analysis of systems of nonlinear differential equations that quantifies the effect of internally generated or externally applied perturbations on the phase of an ongoing oscillation. Nadim et al (2011) elegantly apply PRCs and related techniques to shed light on mechanisms for stabilizing the period of a particular central pattern generator circuit, responsible for the pyloric rhythm in the stomatogastric ganglion of the crab. Although the digestive system of Cancer borealis may seem somewhat removed from the concerns of neural engineers, this system has provided the basis for both experimental and theoretical work on the role of neuromodulation on neural circuitry. Neuromodulators can functionally alter the dynamics of a neural circuit on a moment-to-moment basis, 'carving out' distinct functional circuits from a single anatomical circuit (Marder and Thirumalai 2002). Furthermore, a hallmark of central pattern generator (CPG) systems in humans and other animals is a balance of robustness to perturbation and adaptability to changing conditions.
Here Nadim et al (2011) focus on robustness, both to intrinsic perturbations such as barrages of irregular synaptic activity (incorporated into a dynamical systems model of the circuit as a Poisson input train), and to perturbing inputs from other rhythmic processes internal to the animal, such as a slow modulatory input from the animal's gastric mill. Experimentally identified inhibitory feedback from a particular part of the circuit to a pair of pacemaker cells appears extraneous at first, inasmuch as suppressing it seems to have no effect on the period of the pyloric rhythm. But while the mean period is unaffected by removing this synapse's effect, the variance of the period shows great sensitivity. Using an experimental approach that allows them to artificially remove the inhibition—dynamic clamp—and a model that is simplified but based in a principled fashion on the original system, the authors use the structure of the phase response curves with and without the synapse present to explain the mechanism underlying this variance-suppressing inhibitory synapse. Their results can be explained at a conceptual level using phase plane analysis to show that inhibitory synaptic input and the intrinsic properties of the neuron act to cancel out the changes in phase induced by perturbations. These results may have intriguing implications for the role of inhibition in stabilizing vertebrate nervous systems, as well as artificial neural networks.

The second paper in this special issue uses dynamical analysis to shed light on the dysfunctional activation of peripheral neurons, for instance in paroxysmal attacks of pain or spasticity. Coggan et al (2011) provide insight into changes in axonal excitability through dynamical analysis of conductance-based models. These authors make elegant use of fast/slow analysis to explain the initiation and termination of ectopic spiking that may underlie paroxysmal neurological symptoms.
They work both with a multi-compartment conductance-based model and a lower dimensional, single-compartment model based on the Morris–Lecar model mentioned above. They find that axonal susceptibility to after-discharge depends on dynamical properties such as bistability, in which a dynamical system has more than one stable attractor for a given set of parameters (in this case, a 'quiet' stable fixed point and an 'active' stable limit cycle). Moreover, they find that the system's behavior depends on geometrical features of the dynamics, such as the distance between stable and unstable (saddle) fixed points in the phase plane. Based on their models, they make observations that may have clinical relevance: an axon susceptible to paroxysmal discharges due to disease or mutation may be able to operate normally unless an appropriate 'trigger' is encountered, accounting for the intermittency in the phenomenon that is often observed. Moreover, absolute values of currents may not be as relevant as relative time scales of underlying currents for determining whether paroxysmal discharges will occur.

As Coggan et al (2011) observe, 'clinically relevant changes in excitability can be replicated in surprisingly simple models, and can be explained on the basis of a relatively small number of complex nonlinear interactions'. In what could be a theme for this special issue, they further write 'With increases in computing power, there is less and less practical need to keep models simple. However, as models become more complicated, they become harder to analyze (in any formal mathematical manner at least) and ultimately harder to understand. Especially when one's goal is to explain the basis for some well-characterized phenomenon, building a model with the minimally sufficient components may be a good approach.
Such models afford the best opportunity to apply tools from dynamical systems theory to formally characterize the nonlinear dynamical basis for the phenomenon'.

The third and fourth papers in this special issue, by Spardy et al (2011a, 2011b), use dynamical systems theory to understand the underlying dynamics of vertebrate locomotion. Within the theoretical framework of dynamical systems, a central pattern generator circuit is typically thought of as a stable limit cycle, or isolated periodic orbit. From this point of view, it is natural to see the relationship between the central circuit and the peripheral musculo-skeletal system as feed-forward only. However, real neuromechanical systems often include feedback from the periphery to the central generator, and these interactions may have nontrivial consequences for rhythm generation (Chiel and Beer 1997, Chiel et al 2009). Incorporating peripheral feedback into a neuromechanical central pattern generator complicates the model, but also can provide a closer match to empirical behavior. As an example, Markin et al (2010) recently showed that fictive locomotion (in an isolated spinal cord preparation from the cat) occurred only over a narrow range of supra-spinal drives compared to those supporting normal locomotion in a preparation in which feedback from the limb to the spinal cord was kept intact. The mechanisms by which locomotory behavior can be maintained, both under normal conditions and under reduced supraspinal drive (representing the effects of spinal cord injury (SCI)), are important building blocks for rehabilitative therapy post-SCI. Spardy et al (2011a, 2011b) analyze afferent control in a neuromechanical model of limbed locomotion that addresses these experimental results directly.
Using sophisticated mathematical techniques from dynamical systems analysis, such as dissection of the dynamics into fast and slow subsystems, they are able to stitch together the geometry underlying transitions between different combinations of flexor/extensor activation and stance/swing phases of limb movement. Their analysis explains how peripheral feedback extends the range of supraspinal drives for which stable oscillations occur. In fact, they identify qualitatively distinct mechanisms by which the CPG creates rhythmic behavior in the presence and absence of feedback; in one case, the timing is determined by release from inhibition, and in the other case, escape from inhibition.

In a companion paper, this same group carries out a further analysis of the locomotory control system, allowing them to resolve the following issue. As running speed increases, the durations of different component phases of the motion (stance/swing) do not change symmetrically. Instead, increase in velocity of forward motion is accomplished by decreasing the stance phase duration while the swing phase duration remains roughly constant. How is it that a CPG built of symmetrically related elements (flexor/extensor related rhythm generation neurons and interneurons), and symmetric descending drive, is able to produce asymmetric cycles? By finding a reduced model that retains the essential dynamical elements of the larger computational model, they are able to explain why an asymmetry in the CPG or in the descending drive is not needed to generate asymmetric changes in the duration of the swing/stance phases as running speed increases, as observed experimentally. As part of the analysis, they raise and answer an important question: does the reduced model indeed possess limit cycle dynamics?
They are able to prove mathematically (using averaging methods) that the move to a more analytically tractable model has not eliminated the main phenomenon of interest, and that a limit cycle does indeed exist in the simplified model.

The fifth paper in this special issue, by Milton (2011), extends the classical dynamical systems approach by incorporating the 'reality' of imperfect actuators and sensors, unexpected events in the world, noise and delays. From an engineering viewpoint, delays can cause deleterious complications within control systems. For example, sufficiently long delays in a feedback control loop can be destabilizing. However, the existence of delays in a neural control system is often ignored because it complicates mathematical analysis of the system. The coexistence of delays and random perturbations (or noise) further complicates system analysis. Milton (2011) gives an elegant review of this topic with applications to human neural control problems, for example the stability and response time during postural sway in healthy adults, visuomotor stabilization of an inverted rigid rod (stick balancing), delay-induced transient oscillations and anticipatory synchronization. As Milton (2011) points out, in order to connect insights from analysis of noisy delayed control systems with human neuromotor control, it is essential to understand the nature of the cost functions that the nervous system uses to manage its resources.

The sixth paper in this special issue, by Meng et al (2011), focuses on the problem of parameter estimation for models of nerve cells that have multiple ionic conductances. The richness of the dynamics of nerve cells, which are far more than on/off switches, comes from the many ion channels within their membranes, which allow them to be spontaneously active, to be bistable (e.g. silent or firing repetitively based on inputs), to fire in rhythmic bursts and to have their dynamical properties altered by exogenous neuromodulators.
In many extracellular recordings, however, only information about the timing of spikes is available. What can be inferred from these data? Meng et al (2011) use the statistical theory of point processes and Monte Carlo methods to infer the parameters for a proposed dynamical model of a nerve cell. They show that it is possible to estimate two 'unknown' conductances (gNa and gK) for a standard Hodgkin–Huxley model from two sets of spike train data with two different simulated resting currents, and suggest ways in which this approach could be generalized to more complex spike train data.

The seventh paper, by Ackermann et al (2011), also focuses on the dynamics of ion channels to shed new light on a controversy concerning the mechanism by which high-frequency stimulation can reversibly block conduction of action potentials in peripheral nerves. High-frequency block (HFB) techniques show promise for remediating the effects of chronic spasticity by preventing pathological peripheral activity from propagating to central sensation centers. Using the known dynamics of sodium and potassium channels, and a sensitivity analysis of a dynamical conduction model, the authors are able to narrow down the likely mechanism by which HFB acts to stop signals from propagating into the nervous system.

The eighth paper, by Nabi and Moehlis (2011), uses approaches from control theory to explore the control or remediation of pathophysiological states. In many cases, pathological neurological conditions involve a dynamical component. For example, debilitating akinesia and involuntary tremor associated with Parkinson's disease (PD) are believed to involve atypical synchronized activity in populations of neurons in regions such as the subthalamic nucleus within the basal ganglia (Hauptmann and Tass 2010, Rosin et al 2007). Remediation of PD symptoms via deep brain stimulation (DBS) is limited in that it can only 'force' the system in a small number of independently tunable directions.
How can one most effectively desynchronize a high-dimensional dynamical system using an input that is limited to a much smaller dimension? To make things worse, the details of the system for any given patient—number, connectivity and physiology of the neurons involved—are largely unknown. Nabi and Moehlis (2011) take a step toward addressing these issues by considering optimal desynchronizing control for a simple system that retains essential elements of the problem: given three coupled oscillators with a tendency to synchronize (that is, a globally attracting stable synchronized state), and a control signal that can only be applied to one of the oscillators, what is the optimal desynchronizing control signal under a quadratic control penalty? Put another way, with how light a touch is it possible to bring the oscillators away from their preferred synchronized state by a given amount? The answer depends in part on the form taken by the coupling between the cells. For instance, if the coupling between the stimulated cell and the others is identical, then from symmetry one can see that no desynchronizing control signal exists (at least, if the cells are identical). Encouragingly, for reasonable control stimuli, the authors find that control signals of modest size can effectively desynchronize the population for a large fraction of possible couplings.

The ninth paper addresses one of the most ambitious areas within neural engineering: the application of control theoretic techniques to dynamics of circuits in the central nervous system. Control of high-dimensional systems (often accessed through low-dimensional interventions) and the complexity of interactions intrinsic to central neural circuits make this problem especially challenging. The paper by DeWolf and Eliasmith (2011) proposes a general theoretical framework for modeling neural control, the neural optimal control hierarchy (NOCH), relying on concepts from optimal control theory.
Using the Bellman equation, they define key control concepts for trajectories, argue for a motor hierarchy and suggest ways in which this could be mapped onto actual neural systems, and apply their framework to understand arm reaching, both in normal subjects and in those affected by Huntington's disease and cerebellar injury. They also argue that some of the highly nonlinear responses of motor cortical neurons during reaching behavior are a natural consequence of the control problem solved by the neural circuitry, as they have defined it. Although much experimental work remains to validate their approach, they discuss the likelihood that ideas from control theory could already be incorporated into novel brain–machine interfaces.

Finally, the last two papers begin to empirically address two important aspects of slower time scale dynamics in the nervous system: learning and development. The paper by de Jong et al (2011) develops a new paradigm for understanding how animals respond to spatial problems by studying the paths that rats take among several different food sources. After repeated exposure, the paths that animals take shorten, suggesting that they are able to find more efficient ways of exploiting resources in their environment. The problem is challenging because the animals need to use local cues to determine the nature of the task itself, and then to determine how to improve their responses over time. If a food source is removed, animals are able to rapidly adjust their routes to continue to take the shortest path among the food sources. As the authors point out, this task is related to a classic problem in computational complexity theory, the traveling salesman problem, and may shed light on how biological systems rapidly obtain good (if not globally optimal) solutions to such problems, as well as creating a new paradigm for understanding the mapping of spatial problems within brain areas such as the hippocampus.
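The sense in which heuristics rapidly obtain good, though not provably optimal, tours can be illustrated with a greedy nearest-neighbor strategy for the traveling salesman problem. This is a toy sketch, not the model of de Jong et al; the site coordinates below are arbitrary hypothetical food locations, and for small numbers of sites the greedy tour can be checked against the exact optimum by brute force.

```python
import itertools
import math

def tour_length(order, pts):
    """Total length of a closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(pts, start=0):
    """Greedy heuristic: repeatedly hop to the closest unvisited site."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimal_tour(pts):
    """Exact optimum by brute force (feasible only for a handful of sites)."""
    rest = range(1, len(pts))
    return min(([0] + list(p) for p in itertools.permutations(rest)),
               key=lambda order: tour_length(order, pts))

# six hypothetical food sites arranged on a regular hexagon of unit circumradius
sites = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
greedy = nearest_neighbor_tour(sites)
best = optimal_tour(sites)
print(tour_length(greedy, sites), tour_length(best, sites))
```

For these convex sites the greedy route happens to coincide with the optimal perimeter tour (length 6); for less regular layouts it is typically close to, but longer than, the optimum found by exhaustive search, whose cost grows factorially with the number of sites.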
The paper by Fietkiewicz et al (2011) uses both empirical and modeling studies to understand the developmental dynamics of the respiratory system. Phase relationships among motor neurons driving the respiratory system change early in development, and play an important role in generating stable breathing rhythms. Using cross-correlation techniques, the authors demonstrate the stage of development at which these changes occur, and then create a model that provides some insight into how this change may be instantiated neurally.

These papers also suggest some important directions for future work. As Milton (2011) emphasizes, noise and delays are inherent in the nervous system, the body and the environment, and since all have co-evolved, subject to energetic constraints, many aspects of the control of the overall system will not be clear unless these stochastic processes are properly modeled and understood (Goldwyn et al 2011, Thomas 2011, White et al 2000). Similarly, the two papers by Spardy et al (2011a, 2011b) emphasize that nervous systems are embodied, and that neural dynamics is shaped by the neuromechanical properties of the periphery. The paper by DeWolf and Eliasmith (2011) poses an interesting challenge, suggesting that optimal control theory and hierarchical structures may provide insight into the nervous system. It remains to be seen whether this approach can capture the highly recurrent nature of biological nervous systems, and the complex spatial and temporal dynamics of its elements. The papers by de Jong et al (2011) and Fietkiewicz et al (2011) suggest that understanding more about learning and development will provide deep insights into the slow dynamics of the nervous system that allow it to adapt over the lifespan of an individual.
They pose the challenge of relating local plastic changes to the global dynamics of the system.

Many of the papers suggest fruitful and potentially novel ways to begin to develop new brain–computer interfaces based on an understanding of neural dynamics: extracting parameters for the underlying dynamics using the approaches suggested by Meng et al (2011); using an understanding of the dynamics of ion channels to develop new protocols for enhancing or blocking signaling in the nervous system, using the approaches suggested by Ackermann et al (2011); exploiting an understanding of the dynamics of neural populations to find the lowest-dimensional driving signals that could synchronize or desynchronize population activity, using the approach of Nabi and Moehlis (2011); recognizing the importance of appropriately timed inhibition to stabilize a neural circuit in response to perturbations, using the approach of Nadim et al (2011); and defining rational treatments for paroxysmal bursting in axons based on the dynamical analysis of Coggan et al (2011). Although it may take time, the approaches exemplified by these papers suggest that future brain–computer interfaces will not only exploit correlations between brain activity and sensory input or motor output, but will take full advantage of the actual transformations performed by the dynamics of the nervous system.

References

Ackermann D M, Bhadra N, Gerges M and Thomas P J 2011 Dynamics and sensitivity analysis of high-frequency conduction block J. Neural Eng. 8 065007
Ashby N 2003 Relativity in the global positioning system Liv. Rev. Relat. 6 lrr-2003-1
Chandrasekhar S 1995 Newton's Principia for the Common Reader (Oxford: Oxford University Press) chapter 12
Chiel H J and Beer R D 1997 The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment Trends Neurosci. 20 553–7
Chiel H J, Ting L H, Ekeberg O and Hartmann M J Z 2009 The brain in its body: motor control and sensing in a biomechanical context J. Neurosci. 29 12807–14
Coggan J S, Ocker G K, Sejnowski T J and Prescott S A 2011 Explaining pathological changes in axonal excitability through dynamical analysis of conductance-based models J. Neural Eng. 8 065002
Cohen A H, Holmes P J and Rand R H 1982 The nature of the coupling between segmental oscillators of the lamprey spinal generator for locomotion: a mathematical model J. Math. Biol. 13 345–69
Cormen T H, Leiserson C E, Rivest R L and Stein C 2001 Introduction to Algorithms 2nd edn (Cambridge: MIT Press)
de Jong L W, Gereke B, Martin G M and Fellous J-M 2011 The traveling salesrat: insights into the dynamics of efficient spatial navigation in the rodent J. Neural Eng. 8 065010
DeWolf T and Eliasmith C 2011 The neural optimal control hierarchy for motor control J. Neural Eng. 8 065009
Diacu F and Holmes P 1996 Celestial Encounters: the Origins of Chaos and Stability (Princeton, NJ: Princeton University Press)
Einstein A 1922 The Meaning of Relativity (Expanded Princeton Science Library Edition 2005) (Princeton, NJ: Princeton University Press)
Fenichel N 1979 Geometric singular perturbation theory for ordinary differential equations J. Differ. Equ. 31 53–98
Fietkiewicz C, Loparo K A and Wilson C G 2011 Drive latencies in hypoglossal motoneurons indicate developmental change in the brainstem respiratory network J. Neural Eng. 8 065011
FitzHugh R 1955 Mathematical models of threshold phenomena in the nerve membrane Bull. Math. Biophys. 17 257–78
Goldwyn J H, Imennov N S, Famulare M and Shea-Brown E 2011 Stochastic differential equation models for ion channel noise in Hodgkin–Huxley neurons Phys. Rev. E 83 041908
Hardy G H 1940 A Mathematician's Apology (new edition with a foreword by C P Snow 1969) (Cambridge: Cambridge University Press)
Hauptmann C and Tass P A 2010 Restoration of segregated, physiological neuronal connectivity by desynchronizing stimulation J. Neural Eng. 7 056008
Hodgkin A L and Huxley A F 1952 A quantitative description of membrane current and its application to conduction and excitation in nerve J. Physiol. 117 500–44
Izhikevich E M 2006 Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (Cambridge, MA: MIT Press)
Izhikevich E M and FitzHugh R 2006 FitzHugh–Nagumo model Scholarpedia 1 1349
Jones C K R T 1995 Geometric singular perturbation theory Dynamical Systems (Lecture Notes in Mathematics vol 1609) pp 44–118
Kopell N and Ermentrout G B 1986 Symmetry and phaselocking in chains of weakly coupled oscillators Commun. Pure Appl. Math. 39 623–60
Kopell N and Ermentrout G B 1990 Phase transitions and other phenomena in chains of coupled oscillators SIAM J. Appl. Math. 50 1014–52
Landahl H D and Podolsky R J 1949 On the velocity of conduction in nerve fibers with saltatory transmission Bull. Math. Biophys. 11 19–27
Lotka A J 1925 Elements of Physical Biology (Baltimore, MD: Williams and Wilkins)
Marder E and Thirumalai V 2002 Cellular, synaptic and network effects of neuromodulation Neural Netw. 15 479–93
Markin S N, Klishko A N, Shevtsova N A, Lemay M A, Prilutsky B I and Rybak I A 2010 Afferent control of locomotor CPG: insights from a simple neuro-mechanical model Ann. New York Acad. Sci. 1198 21–34
Meng L, Kramer M A and Eden U T 2011 A sequential Monte Carlo approach to estimate biophysical neural models from spikes J. Neural Eng. 8 065006
Milton J G 2011 The delayed and noisy nervous system: implications for neural control J. Neural Eng. 8 065005
Morris C and Lecar H 1981 Voltage oscillations in the barnacle giant muscle fiber Biophys. J. 35 193–213
Nabi A and Moehlis J 2011 Single input optimal control for globally coupled neuron networks J. Neural Eng. 8 065008
Nadim F, Zhao S, Zhou L and Bose A 2011 Inhibitory feedback promotes stability in an oscillatory network J. Neural Eng. 8 065001
Newton I 1687 Philosophiae Naturalis Principia Mathematica Section XI, propositions LVII–LXIII (London: Royal Society)
Ralph T C and Pryde G J 2010 Progress in Optics vol 54, ed E Wolf (New York: Elsevier) pp 209–79 (arXiv:1103.6071)
Rashevsky N 1960 Mathematical Biophysics: Physico-Mathematical Foundations of Biology vol 1 3rd edn (New York: Dover) pp 375–462 (first edition 1938)
Rinzel J and Ermentrout G B 1989 Analysis of neuronal excitability and oscillations Methods in Neuronal Modeling ed C Koch and I Segev (Cambridge, MA: MIT Press) pp 135–69
Rosin B, Nevet A, Elias S, Rivlin-Etzion M, Israel Z and Bergman H 2007 Physiology and pathophysiology of the basal ganglia-thalamo-cortical networks Parkinsonism Relat. Disord. 13 S437–9
Spardy L E, Markin S N, Shevtsova N A, Prilutsky B I, Rybak I A and Rubin J E 2011a A dynamical systems analysis of afferent control in a neuromechanical model of locomotion: I. Rhythm generation J. Neural Eng. 8 065003
Spardy L E, Markin S N, Shevtsova N A, Prilutsky B I, Rybak I A and Rubin J E 2011b A dynamical systems analysis of afferent control in a neuromechanical model of locomotion: II. Phase asymmetry J. Neural Eng. 8 065004
Steane A 1998 Quantum computing Rep. Prog. Phys. 61 117–73
Strogatz S H 1994 Nonlinear Dynamics and Chaos: with Applications to Physics, Biology, Chemistry, and Engineering (Cambridge, MA: Perseus)
Thomas P J 2011 A lower bound for the first passage time density of the suprathreshold Ornstein–Uhlenbeck process J. Appl. Probab. 48 420–34
White J A, Rubinstein J T and Kay A R 2000 Channel noise in neurons Trends Neurosci. 23 131–7
Wilson H R and Cowan J D 1972 Excitatory and inhibitory interactions in localized populations of model neurons Biophys. J. 12 1–24
Wilson H R and Cowan J D 1973 A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue Biol. Cybern. 13 55–80
