Abstract
Biological neural communications channels transport environmental information from sensors through chains of active dynamical neurons to neural centers for decisions and actions that achieve required functions. These channels are able to create information and to transfer information from one time scale to another because of the intrinsic nonlinear dynamics of the component neurons. We discuss a very simple neural information channel composed of sensory input in the form of a spike train that arrives at a model neuron, then moves through a realistic synapse to a second neuron where the information in the initial sensory signal is read. Our model neurons are four-dimensional generalizations of the Hindmarsh-Rose neuron, and we use a model of a chemical synapse derived from first-order kinetics. The four-dimensional model neuron has a rich variety of dynamical behaviors, including periodic bursting, chaotic bursting, continuous spiking, and multistability. We show that, for many of these regimes, the parameters of the chemical synapse can be tuned so that information about the stimulus that is unreadable at the first neuron in the channel can be recovered by the dynamical activity of the synapse and the second neuron. Information creation is familiar in the autonomous oscillations of nonlinear dynamical systems that allow chaos; it is associated with the instabilities that lead to positive Lyapunov exponents in their dynamical behavior. Our results indicate how nonlinear neurons acting as input/output systems along a communications channel can recover information apparently "lost" at earlier junctions on the channel. Our measure of information transmission is the average mutual information between elements, and because the channel is active and nonlinear, the average mutual information between the sensory source and the final neuron may be greater than the average mutual information at an earlier neuron in the channel.
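The abstract's model neurons are four-dimensional generalizations of the Hindmarsh-Rose equations. As a minimal, hedged sketch of the kind of dynamics involved, the widely used three-dimensional Hindmarsh-Rose model (not the paper's own four-dimensional variant) can be integrated with a fixed-step fourth-order Runge-Kutta scheme; the parameter values below are the standard textbook ones and are assumptions, not the paper's:

```python
import numpy as np

def hindmarsh_rose(state, I_ext):
    """Right-hand side of the classic 3D Hindmarsh-Rose model
    (the paper uses a 4D generalization; this is the standard 3D form)."""
    x, y, z = state
    dx = y + 3.0 * x**2 - x**3 - z + I_ext   # membrane potential
    dy = 1.0 - 5.0 * x**2 - y                # fast recovery variable
    dz = 0.006 * (4.0 * (x + 1.6) - z)       # slow adaptation variable
    return np.array([dx, dy, dz])

def integrate(I_ext=3.0, dt=0.01, steps=100_000):
    """Fixed-step RK4 integration; returns the voltage-like x(t) trace."""
    s = np.array([-1.0, 0.0, 2.0])
    xs = np.empty(steps)
    for i in range(steps):
        k1 = hindmarsh_rose(s, I_ext)
        k2 = hindmarsh_rose(s + 0.5 * dt * k1, I_ext)
        k3 = hindmarsh_rose(s + 0.5 * dt * k2, I_ext)
        k4 = hindmarsh_rose(s + dt * k3, I_ext)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0]
    return xs
```

With the external current near `I_ext = 3.0`, the standard model produces the bursting regimes (periodic or chaotic, depending on parameters) that the abstract refers to.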
This behavior is strikingly different from the passive role communications channels usually play, and the "data processing theorem" of conventional communications theory is violated by these neural channels. Our calculations indicate that neurons can reinforce reliable transmission along a chain even when the synapses and the neurons are not completely reliable components. This phenomenon is generic in parameter space, robust in the presence of noise, and independent of the discretization process. Our results suggest a framework in which one might understand the apparent design complexity of neural information transduction networks. If networks with many dynamical neurons can recover information not apparent at various waystations in the communications channel, such networks may be more robust to noisy signals, may be more capable of communicating many types of encoded sensory neural information, and may be the appropriate design for components, neurons and synapses, that are individually imprecise, inaccurate "devices."
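The stated measure of transmission is the average mutual information between elements of the channel. As a minimal sketch (a standard histogram-based estimator applied to two discretized signals, e.g. interspike-interval sequences; the binning choice is an assumption, not the paper's procedure), the quantity can be estimated as:

```python
import numpy as np

def average_mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits between two discretized
    signals, from the empirical joint and marginal distributions."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # empirical joint p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y), shape (1, bins)
    nz = pxy > 0                              # skip empty cells: 0 * log 0 = 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A violation of the data-processing inequality, in these terms, is the observation that the estimate between source and second neuron can exceed the estimate between source and first neuron, something impossible for a passive channel.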