Abstract

Neurons in the brain communicate with spikes, which are discrete events in time and value. Functional network models often employ rate units that are continuously coupled by analog signals. Is there a qualitative difference implied by these two forms of signaling? We develop a unified mean-field theory for large random networks to show that first- and second-order statistics in rate and binary networks are in fact identical if rate neurons receive the right amount of noise. Their response to presented stimuli, however, can be radically different. We quantify these differences by studying how nearby state trajectories evolve over time, asking to what extent the dynamics is chaotic. Chaos in the two models is found to be qualitatively different. In binary networks, we find a network-size-dependent transition to chaos and a chaotic submanifold whose dimensionality expands stereotypically with time, while rate networks with matched statistics are nonchaotic. Dimensionality expansion in chaotic binary networks aids classification in reservoir computing and optimal performance is reached within about a single activation per neuron; a fast mechanism for computation that we demonstrate also in spiking networks. A generalization of this mechanism extends to rate networks in their respective chaotic regimes.

Received 7 May 2020; revised 18 March 2021; accepted 23 April 2021
DOI: https://doi.org/10.1103/PhysRevX.11.021064
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Physics Subject Headings (PhySH): Statistical Physics
Research Areas: Fluctuations & noise; Neural encoding; Neuronal networks; Statistical field theory
Techniques: Chaos & nonlinear dynamics; Dynamical mean field theory; Ising model; Langevin equation
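As an informal companion to the statistical-equivalence statement, the following is a minimal Python sketch that simulates a random binary network with asynchronous threshold updates and a noise-driven rate network on the same coupling matrix, and compares their population-averaged mean activity and autocorrelation. Everything in it is an assumption chosen for illustration: the network size, the gain g, the tanh gain function, the zero-temperature update rule, and in particular the hand-picked noise amplitude D. The paper instead derives the noise that matches the two models self-consistently from the mean-field theory.

# Illustrative sketch (not the paper's code): compare first- and second-order statistics
# of a random binary network and a noise-driven rate network sharing one coupling matrix.
# All parameter values (N, g, D, time constants) are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, g, tau = 500, 1.5, 1.0
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

# Binary network: states in {-1, +1}, asynchronous zero-temperature threshold updates.
def simulate_binary(T_steps):
    s = rng.choice([-1.0, 1.0], size=N)
    record = np.empty((T_steps, N))
    for t in range(T_steps):
        for _ in range(N):                      # one expected activation per neuron per unit time
            i = rng.integers(N)
            s[i] = 1.0 if J[i] @ s >= 0 else -1.0
        record[t] = s
    return record

# Rate network: tau * dx/dt = -x + J @ tanh(x) + noise, integrated with Euler-Maruyama.
def simulate_rate(T_steps, dt=0.05, D=0.5):
    x = rng.normal(0.0, 1.0, size=N)
    steps_per_unit = int(round(1.0 / dt))
    record = np.empty((T_steps, N))
    for t in range(T_steps):
        for _ in range(steps_per_unit):
            noise = np.sqrt(2.0 * D * dt) * rng.normal(size=N)
            x += (dt / tau) * (-x + J @ np.tanh(x)) + noise
        record[t] = np.tanh(x)                  # analog output of the rate units
    return record

def stats(record):
    # Population-averaged mean and lag-resolved autocovariance of the recorded activity.
    m = record.mean()
    z = record - record.mean(axis=0)
    T = len(z)
    ac = [np.mean(np.sum(z[:T - lag] * z[lag:], axis=0) / (T - lag)) for lag in range(5)]
    return m, ac

mb, acb = stats(simulate_binary(200))
mr, acr = stats(simulate_rate(200))
print("binary: mean=%.3f  autocov(lags 0-4)=%s" % (mb, np.round(acb, 3)))
print("rate:   mean=%.3f  autocov(lags 0-4)=%s" % (mr, np.round(acr, 3)))

With the noise amplitude fixed by hand, the two sets of statistics will generally not coincide; the sketch only illustrates the comparison procedure, not the paper's matching condition.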

Highlights

  • While biological neurons communicate by spikes, which are discrete all-or-nothing events, artificial neural networks overwhelmingly use continuous-valued units commonly referred to as “rate neurons.” The ramifications of this fundamental distinction between discrete and continuous signaling have been debated concerning learning algorithms [1,2], energy efficiency [3], and information coding [4,5,6,7,8,9,10,11].

  • Is there a qualitative difference implied by these two forms of signaling? We develop a unified mean-field theory for large random networks to show that first- and second-order statistics in rate and binary networks are identical if rate neurons receive the right amount of noise.

  • The field theory must be flexible enough to enable the use of methods such as disorder averages and replica calculations, techniques that are required to systematically derive mean-field equations. These allow us to compare networks on a statistical level and to assess how distances between different dynamical states evolve over time and how classification of input signals can be achieved (Fig. 1); a schematic of the disorder-averaging step is sketched after this list.
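The following LaTeX snippet is a schematic of the standard generating-functional setup for a network with random couplings, written in generic placeholder notation (x for the network state, S_J for the model-specific action, j for a source field). It indicates what "disorder average" refers to here; it is not the paper's exact model-independent derivation.

% Schematic disorder average over random couplings J (placeholder notation).
\begin{align}
  Z_J[j] &= \int \mathcal{D}x \; \exp\!\big( S_J[x] + j^{\mathsf{T}} x \big),
  &
  \bar{Z}[j] &= \int \mathrm{d}J \, p(J)\, Z_J[j] .
\end{align}
% For Gaussian couplings J_{ij} ~ N(0, g^2/N) and large N, averaging over J replaces the
% recurrent input by an effective Gaussian field eta(t) acting on a single unit, whose mean
% <eta(t)> and autocorrelation <eta(t) eta(s)> are fixed self-consistently by the statistics
% of that single-unit dynamics.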


Summary

INTRODUCTION

While biological neurons communicate by spikes, which are discrete all-or-nothing events, artificial neural networks overwhelmingly use continuous-valued units commonly referred to as “rate neurons.” The ramifications of this fundamental distinction between discrete and continuous signaling have been debated concerning learning algorithms [1,2], energy efficiency [3], and information coding [4,5,6,7,8,9,10,11].

For classification, inputs belonging to the same class should lead to similar representations to support generalization; the distance between trajectories of data points belonging to the same class should have limited growth [Fig. 1(c), dark orange]. This view exposes the tight link to chaos, the sensitivity of the dynamics to initial conditions. Giving up on the statistical match, rate networks with weak noise in their corresponding chaotic regime show a qualitatively different divergence of state trajectories that sensitively depends on the coupling strength (Sec. II F). Given a distribution of input data whose within-class variability is smaller than the average between-class distances [Fig. 1(d), dark orange and green], the dimensionality expansion of presented stimuli by chaotic binary networks leads to a separation that is optimal for classification after t_opt/τ = 2 ln 2 ≃ 1.4 activations per neuron. Distances between states in binary networks increase transiently in a stereotypical manner, confined to a chaotic submanifold whose dimensionality grows with elapsed time.
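To make the notion of diverging state trajectories concrete, here is a minimal Python sketch, under assumed parameters (N, g, run length, zero-temperature threshold updates), that flips a single neuron in one of two otherwise identical copies of a random binary network and tracks their normalized Hamming distance in units of activations per neuron. It illustrates the kind of divergence measurement described above; it is not the paper's simulation protocol.

# Illustrative sketch (not the paper's code): divergence of two replicas of a random binary
# network that differ initially in a single neuron, under a shared asynchronous update order.
import numpy as np

rng = np.random.default_rng(1)
N, g = 1000, 1.0
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

s_a = rng.choice([-1.0, 1.0], size=N)
s_b = s_a.copy()
s_b[0] *= -1.0                                  # minimal perturbation: flip one neuron

T_activations = 4                               # total time in expected activations per neuron
updates = rng.integers(N, size=T_activations * N)   # shared asynchronous update sequence

for k, i in enumerate(updates):
    s_a[i] = 1.0 if J[i] @ s_a >= 0 else -1.0   # zero-temperature threshold update
    s_b[i] = 1.0 if J[i] @ s_b >= 0 else -1.0
    if (k + 1) % (N // 2) == 0:                 # record twice per activation
        t = (k + 1) / N
        d = np.mean(s_a != s_b)                 # normalized Hamming distance
        print(f"t = {t:.1f} activations/neuron   Hamming distance = {d:.3f}")

# The paper reports optimal class separation near t_opt/tau = 2 ln 2; for reference,
# 2 * np.log(2) is approximately 1.39 in these time units.

Sharing one update sequence between the two copies isolates the sensitivity to the flipped state from the randomness of the update order itself; whether and how fast the perturbation spreads is precisely the size- and coupling-dependent question analyzed in the paper.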

Model-independent field theory of neuronal networks
Binary neuron model
Binary networks are always chaotic in the thermodynamic limit
Transition to chaos in finite-size binary networks
Dynamics in binary networks is governed by a chaotic submanifold
Condition for chaos in rate networks
No chaos in rate networks with matched statistics
Residual correlation
Transient of decorrelation
Computation by transient chaotic dimensionality expansion
Generalization to other network models
Rate networks
Spiking networks
LSTM networks
DISCUSSION
Transition to chaos in binary networks at finite size
Correspondence of DMFT in binary and rate networks
Assumptions on connectivity
Relation of the model-independent path-integral formulation to earlier work
Different chaotic dynamics in binary and rate networks
Origin of the difference
Flux tubes in binary and spiking networks
Transient chaotic SNR amplification
Elucidation by dimensionality in binary networks
Relation of chaos and computational power
Other related works
A mechanism of fast computation in spiking networks
Experimental evidence and predictions
Model-independent mean-field theory for random networks
Derivation of the mean-field equation for binary networks
Derivation of the ODE for autocorrelations in binary networks
Flux tubes in binary networks
Equivalence of dynamical mean-field theories of binary and rate networks
Slope of correlation transmission in binary and rate neurons
Noisy binary pattern classification task
Description of simulations
