Abstract
Neurons in the brain communicate with spikes, which are discrete events in time and value. Functional network models often employ rate units that are continuously coupled by analog signals. Is there a qualitative difference implied by these two forms of signaling? We develop a unified mean-field theory for large random networks to show that first- and second-order statistics in rate and binary networks are in fact identical if rate neurons receive the right amount of noise. Their response to presented stimuli, however, can be radically different. We quantify these differences by studying how nearby state trajectories evolve over time, asking to what extent the dynamics is chaotic. Chaos in the two models is found to be qualitatively different. In binary networks, we find a network-size-dependent transition to chaos and a chaotic submanifold whose dimensionality expands stereotypically with time, while rate networks with matched statistics are nonchaotic. Dimensionality expansion in chaotic binary networks aids classification in reservoir computing and optimal performance is reached within about a single activation per neuron; a fast mechanism for computation that we demonstrate also in spiking networks. A generalization of this mechanism extends to rate networks in their respective chaotic regimes.
Received 7 May 2020; revised 18 March 2021; accepted 23 April 2021
DOI: https://doi.org/10.1103/PhysRevX.11.021064
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Physics Subject Headings (PhySH)
Research Areas: Fluctuations & noise; Neural encoding; Neuronal networks; Statistical field theory
Techniques: Chaos & nonlinear dynamics; Dynamical mean field theory; Ising model; Langevin equation
Statistical Physics
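The statistical correspondence claimed in the abstract can be probed numerically. The following is a minimal sketch, not the authors' code: it simulates a random binary network with asynchronous Heaviside updates next to a noise-driven rate network with the same coupling matrix and compares their mean activities. All parameters (N, g, dt, and in particular the noise amplitude D, which here stands in for the matching condition derived in the paper) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, g = 500, 1.5                        # network size and coupling scale (assumed)
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # fixed random couplings

# Binary network: states n_i in {0, 1}, asynchronous Heaviside updates.
n = rng.integers(0, 2, N).astype(float)
binary_means = []
for step in range(200 * N):            # roughly 200 expected updates per neuron
    i = rng.integers(N)                # pick one neuron at random
    n[i] = float(J[i] @ n > 0.0)
    if step > 100 * N:                 # discard the transient, then record
        binary_means.append(n.mean())
print("binary mean activity:", np.mean(binary_means))

# Rate network: Langevin dynamics with additive white noise. The amplitude D
# is a free knob here; in the paper it is fixed by the matching condition.
dt, tau, D = 0.01, 1.0, 0.5
x = np.zeros(N)
rate_means = []
for step in range(20_000):
    phi = 0.5 * (1.0 + np.tanh(x))     # sigmoidal rate confined to [0, 1]
    x += dt / tau * (-x + J @ phi) + np.sqrt(2.0 * D * dt / tau) * rng.normal(size=N)
    if step > 10_000:
        rate_means.append(phi.mean())
print("rate mean activity:", np.mean(rate_means))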
Highlights
While biological neurons communicate by spikes, which are discrete all-or-nothing events, artificial neural networks overwhelmingly use continuous-valued units commonly referred to as “rate neurons.” The ramifications of this fundamental distinction between discrete and continuous signaling have been debated concerning learning algorithms [1,2], energy efficiency [3], and information coding [4,5,6,7,8,9,10,11].
Is there a qualitative difference implied by these two forms of signaling? We develop a unified mean-field theory for large random networks to show that first- and second-order statistics in rate and binary networks are identical if rate neurons receive the right amount of noise
It must be flexible enough to enable the use of methods such as disorder averages and replica calculations, techniques that are required to systematically derive mean-field equations. These allow us to compare networks on a statistical level, to assess how distances between different dynamical states evolve over time, and to study how classification of input signals can be achieved (Fig. 1)
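How distances between dynamical states evolve can also be estimated directly in simulation. The sketch below is an illustration rather than the paper's method: it flips a single neuron in one of two otherwise identical replicas of a binary network and records the Hamming distance after each update sweep; growth or decay of this distance indicates chaotic or regular dynamics. Parameters are assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, g = 1000, 1.0
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

n1 = rng.integers(0, 2, N).astype(float)
n2 = n1.copy()
n2[0] = 1.0 - n2[0]                    # perturb a single neuron

for step in range(20 * N):             # ~20 update sweeps
    i = rng.integers(N)                # identical update schedule in both replicas
    n1[i] = float(J[i] @ n1 > 0.0)
    n2[i] = float(J[i] @ n2 > 0.0)
    if (step + 1) % N == 0:
        print(f"sweep {(step + 1) // N:2d}: Hamming distance = {np.mean(n1 != n2):.3f}")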
Summary
While biological neurons communicate by spikes, which are discrete all-or-nothing events, artificial neural networks overwhelmingly use continuous-valued units commonly referred to as “rate neurons.” The ramifications of this fundamental distinction between discrete and continuous signaling have been debated concerning learning algorithms [1,2], energy efficiency [3], and information coding [4,5,6,7,8,9,10,11]. Stimuli belonging to the same class should lead to similar representations to support generalization; the distance between trajectories of data points belonging to the same class should have limited growth [Fig. 1(c), dark orange]. This view exposes the tight link to chaos, the sensitivity of the dynamics to initial conditions. Distances between states in binary networks increase transiently in a stereotypical manner, confined to a chaotic submanifold whose dimensionality expands stereotypically with time. Giving up on the statistical match, rate networks with weak noise in their corresponding chaotic regime show a qualitatively different divergence of state trajectories that sensitively depends on the coupling strength (Sec. II F). Given a distribution of input data whose within-class variability is smaller than the average between-class distance [Fig. 1(d), dark orange and green], the dimensionality expansion of presented stimuli by chaotic binary networks leads to a separation that is optimal for classification after t_opt/τ = 2 ln 2 ≃ 1.4 activations per neuron.
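As a rough numerical companion to the classification argument (the predicted optimum is t_opt/τ = 2 ln 2 ≈ 1.39, i.e., about 1.4 activations per neuron), the following sketch pushes noisy samples of two stimulus classes through a random binary network and tracks the ratio of between-class to within-class distance over update sweeps. Everything here (class templates, noise level, couplings) is an illustrative assumption, not the paper's setup.

import numpy as np

rng = np.random.default_rng(2)
N, g, n_samples = 500, 1.2, 20
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

def mean_dist(X, Y, same=False):
    # Mean pairwise Hamming distance between rows of X and rows of Y.
    D = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    if same:                           # exclude zero self-distances
        return D.sum() / (len(X) * (len(X) - 1))
    return D.mean()

centers = rng.integers(0, 2, (2, N)).astype(float)   # two class templates
states, labels = [], []
for c in range(2):
    for _ in range(n_samples):
        s = centers[c].copy()
        flip = rng.random(N) < 0.02                  # small within-class noise
        s[flip] = 1.0 - s[flip]
        states.append(s)
        labels.append(c)
states, labels = np.array(states), np.array(labels)

for sweep in range(5):
    for i in rng.permutation(N):       # one sweep = one expected update per neuron
        states[:, i] = (states @ J[i] > 0.0).astype(float)
    a, b = states[labels == 0], states[labels == 1]
    ratio = mean_dist(a, b) / (0.5 * (mean_dist(a, a, True) + mean_dist(b, b, True)))
    print(f"sweep {sweep + 1}: between/within distance ratio = {ratio:.2f}")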