Abstract
We are used to viewing noise as a nuisance in computing systems. This is a pity, since noise will be abundantly available in energy-efficient future nanoscale devices and circuits. I propose here to learn from the way the brain deals with noise, and apparently even benefits from it. Recent theoretical results have provided insight into how this can be achieved: how noise enables networks of spiking neurons to carry out probabilistic inference through sampling and also enables creative problem solving. In addition, noise supports the self-organization of networks of spiking neurons, and learning from rewards. I will sketch here the main ideas and some consequences of these results. I will also describe why these results are paving the way for a qualitative jump in the computational capability and learning performance of neuromorphic networks of spiking neurons with noise, and for other future computing systems that are able to treat noise as a resource.
Highlights
Quite a number of algorithms and architectures have been proposed for computations with spiking neurons.
I want to review here two types of computational applications of networks of spiking neurons with noise: probabilistic inference from knowledge stored in complex probability distributions, and, in Sections II-E and II-F, the generation of heuristic solutions for hard computational problems (a minimal sampling sketch follows these highlights).
If we manage to program constraints that are meaningful for a practically relevant constraint satisfaction problem into the architecture of a network of spiking neurons, and if we find ways of controlling the frequency of network states y during the resulting stochastic dynamics in dependence on the number and importance of the satisfied constraints that each state y represents, then we have found a new way of using networks of spiking neurons for purposeful computations (a second sketch below illustrates this idea).
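To make the sampling view from the first highlight concrete, here is a minimal Python sketch of Gibbs sampling from a Boltzmann distribution p(x) ∝ exp(½ xᵀWx + bᵀx). It is an illustrative stand-in, not the paper's spiking model: in neural sampling, the stochastic firing of a spiking neuron plays the role of the sigmoidal update rule below, and the weights W, biases b, and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(W, b, n_steps=20000, burn_in=2000):
    """Draw binary states x from p(x) proportional to exp(0.5*x'Wx + b'x).

    Each update resamples one unit with probability sigmoid(u_i), where
    u_i = sum_j W[i, j] * x[j] + b[i] is its momentary net input; this
    sigmoidal rule is the abstract counterpart of stochastic firing.
    """
    m = len(b)
    x = rng.integers(0, 2, size=m)                # random initial state
    samples = []
    for step in range(n_steps):
        i = rng.integers(m)                       # pick one unit at random
        u = W[i] @ x + b[i]                       # net input (W has zero diagonal)
        x[i] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        if step >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

# Two mutually excitatory units tend to be "on" together.
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])
b = np.array([-1.0, -1.0])
samples = gibbs_sample(W, b)
print("estimated marginals p(x_i = 1):", samples.mean(axis=0))
```

With the mutually excitatory weights above, the two units are "on" together far more often than chance would allow, which is the kind of statistical knowledge a complex distribution can encode and sampling can read out.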
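The second sketch, again an illustrative assumption rather than a spiking model, shows the constraint-satisfaction idea from the last highlight: an energy function counts violated constraints, and Metropolis-style stochastic dynamics makes the system visit states with frequency proportional to exp(−violations/T), so states that satisfy more constraints dominate the trajectory. The toy clause set and temperature T are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy constraints: each clause lists (variable, required value)
# pairs and is satisfied if at least one pair holds.
clauses = [[(0, 1), (1, 0)],    # x0 = 1  or  x1 = 0
           [(1, 1), (2, 1)],    # x1 = 1  or  x2 = 1
           [(0, 0), (2, 0)]]    # x0 = 0  or  x2 = 0

def violations(x):
    """Number of unsatisfied clauses in state x (the 'energy')."""
    return sum(not any(x[i] == v for i, v in clause) for clause in clauses)

def stochastic_search(n_vars=3, n_steps=20000, T=0.5):
    """Metropolis dynamics: states appear with frequency ~ exp(-violations/T),
    so states satisfying more constraints dominate the visited trajectory;
    per-clause weights could additionally encode constraint importance."""
    x = rng.integers(0, 2, size=n_vars)
    counts = {}
    for _ in range(n_steps):
        x_new = x.copy()
        x_new[rng.integers(n_vars)] ^= 1          # flip one random variable
        dE = violations(x_new) - violations(x)
        if rng.random() < np.exp(-dE / T):        # downhill moves always accepted
            x = x_new
        counts[tuple(x)] = counts.get(tuple(x), 0) + 1
    return counts

counts = stochastic_search()
best = max(counts, key=counts.get)                # most frequently visited state
print("best state:", best, "violations:", violations(best))
```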
Summary
Quite a number of algorithms and architectures have been proposed for computations with spiking neurons. The state of the network at time t can be described by a binary vector x = ⟨x_1, …, x_m⟩, as for a Boltzmann machine, where x_i = 1 indicates that neuron i has fired within the time interval [t − τ, t]. Note that this binary vector x is not a Markov state of the underlying Markov chain, since that would require that the future firing activity of the network be stochastically independent of its activity before time t − τ, given the state x. I will sketch how one prominent motif of cortical microcircuits, the winner-take-all (WTA) circuit [see Fig. 3(b)], which consists of interacting excitatory and inhibitory neurons, provides important advantages for computing and learning in a stochastic context. These benefits of clever biological circuit architectures could not be addressed properly in Boltzmann machines, because their restriction to symmetric weights makes them unsuitable for understanding the specific computational roles of excitatory and inhibitory neurons in stereotypical microcircuit configurations.
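As a small illustration of the state definition used above (the data layout and names are assumptions for illustration, not the paper's notation), the following Python sketch reads the binary vector x(t) off a set of spike trains. It also makes the Markov caveat tangible: x(t) records only whether, not exactly when, each neuron fired within the window, so the future evolution can still depend on spike timing before t − τ.

```python
import numpy as np

def network_state(spike_times, t, tau):
    """Binary state x(t): x[i] = 1 iff neuron i fired within [t - tau, t].

    spike_times[i] is an array with the spike times of neuron i
    (an illustrative data layout, not notation from the paper).
    """
    return np.array([int(np.any((s >= t - tau) & (s <= t)))
                     for s in spike_times])

# Toy example: three neurons, a 10 ms window ending at t = 25 ms.
spikes = [np.array([5.0, 31.0]),   # neuron 0: fired outside the window
          np.array([18.0]),        # neuron 1: fired inside the window
          np.array([])]            # neuron 2: never fired
print(network_state(spikes, t=25.0, tau=10.0))   # -> [0 1 0]
```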