Abstract

An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, these features enable such networks to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
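
As a purely illustrative aside that is not taken from the article: the “explaining away” effect in a Bayesian network with converging arrows can be made concrete with a three-node example, in which two independent binary causes A and B feed into a common effect C. Observing C = 1 raises the posterior probability of A, but additionally observing B = 1 lowers it again, since B already accounts for the effect. The priors and noisy-OR parameters in the Python sketch below are hypothetical, and the posteriors are computed by exact enumeration rather than by a neural network.

```python
# Illustrative only: "explaining away" in the converging-arrow network A -> C <- B.
# All numbers (priors, noisy-OR strengths) are hypothetical, not from the article.
from itertools import product

p_a = 0.1  # prior P(A=1)
p_b = 0.1  # prior P(B=1)

def p_c1(a, b):
    """P(C=1 | A=a, B=b) as a noisy-OR of the two causes (strength 0.8 each)."""
    return 1.0 - (1.0 - 0.8 * a) * (1.0 - 0.8 * b)

def joint(a, b, c):
    """Joint probability P(A=a, B=b, C=c) of the three binary variables."""
    pc = p_c1(a, b)
    return ((p_a if a else 1 - p_a)
            * (p_b if b else 1 - p_b)
            * (pc if c else 1 - pc))

def posterior_a(c_obs, b_obs=None):
    """P(A=1 | C=c_obs [, B=b_obs]) by summing the joint over all consistent states."""
    num = den = 0.0
    for a, b, c in product([0, 1], repeat=3):
        if c != c_obs or (b_obs is not None and b != b_obs):
            continue
        p = joint(a, b, c)
        den += p
        num += p * a
    return num / den

print(round(posterior_a(1), 3))     # P(A=1 | C=1): rises from the prior 0.1 to about 0.53
print(round(posterior_a(1, 1), 3))  # P(A=1 | C=1, B=1): drops back to about 0.12
```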

Highlights

  • We show in this article that noisy networks of spiking neurons are in principle able to carry out a quite demanding class of computations: probabilistic inference in general graphical models.

  • We demonstrate in computer simulations that precisely structured neuronal microcircuits enable networks of spiking neurons to solve, through their inherent stochastic dynamics, a variety of complex probabilistic inference tasks.

  • We present several ways in which probabilistic inference for a given joint distribution p(z1, …, zK), which is not required to have the form of a 2nd-order Boltzmann distribution (Eq. (5)), can be carried out through sampling from the inherent dynamics of a recurrent network N of stochastically spiking neurons (a non-neural sampling sketch follows below).
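
To make the sampling statement in the last highlight concrete, the following non-neural sketch applies plain Gibbs sampling to an arbitrary unnormalized joint distribution over binary variables z1, …, zK. It is only a stand-in for the spiking-network dynamics described in the article: the target function phi and its parameters are hypothetical, and phi is deliberately not of 2nd-order Boltzmann form.

```python
# A non-neural stand-in: Gibbs sampling from an arbitrary unnormalized
# target phi over binary vectors z in {0,1}^K. The target below is hypothetical.
import random

def gibbs_sample(phi, K, steps=20000, seed=0):
    """Draw dependent samples z approximately distributed as p(z) proportional to phi(z)."""
    rng = random.Random(seed)
    z = [rng.randint(0, 1) for _ in range(K)]
    samples = []
    for _ in range(steps):
        k = rng.randrange(K)                  # pick one variable to resample
        z1 = z[:k] + [1] + z[k + 1:]          # state with z_k = 1
        z0 = z[:k] + [0] + z[k + 1:]          # state with z_k = 0
        p1 = phi(z1) / (phi(z1) + phi(z0))    # full conditional P(z_k = 1 | rest)
        z = z1 if rng.random() < p1 else z0
        samples.append(tuple(z))
    return samples

# Hypothetical target with a single third-order factor (not 2nd-order Boltzmann):
phi = lambda z: 1.0 + 4.0 * (z[0] * z[1] * (1 - z[2]))
samples = gibbs_sample(phi, K=3)
# Empirical probability of the favored state (1,1,0); exact value is 5/12, about 0.417.
print(sum(s == (1, 1, 0) for s in samples) / len(samples))
```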


Introduction

We show in this article that noisy networks of spiking neurons are in principle able to carry out a quite demanding class of computations: probabilistic inference in general graphical models. Our sampling-based approach shows how an internal model of an arbitrary target distribution p can be implemented by a network of stochastically firing neurons (such an internal model for a distribution p that reflects the statistics of natural stimuli has been found to emerge in primary visual cortex [3]). This approach requires the presence of stochasticity (noise), and is inherently compatible with experimentally observed phenomena such as the ubiquitous trial-to-trial variability of responses of biological networks of neurons.
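
For readers unfamiliar with sampling by stochastic neurons, the following deliberately simplified sketch illustrates the kind of dynamics such an approach builds on, under strong simplifying assumptions (discrete time, asynchronous updates, no refractory mechanism): each unit k represents a binary variable zk and switches to 1 with probability σ(uk), where the “membrane potential” is uk = bk + Σj wkj zj. With symmetric weights, these updates sample from a 2nd-order Boltzmann distribution over z; the article is concerned with extending such sampling to general graphical models. The weights and biases below are hypothetical.

```python
# Simplified sketch of sampling with stochastic binary units (assumptions:
# discrete time, asynchronous updates, no refractory dynamics). Each unit
# sets z_k = 1 with probability sigma(u_k), u_k = b_k + sum_j W[k][j] * z[j].
import math
import random

def sigma(u):
    return 1.0 / (1.0 + math.exp(-u))

def sample_network(W, b, steps=50000, seed=1):
    """Run asynchronous stochastic updates and count how often each state occurs."""
    rng = random.Random(seed)
    K = len(b)
    z = [0] * K
    counts = {}
    for _ in range(steps):
        k = rng.randrange(K)                          # update one unit at a time
        u = b[k] + sum(W[k][j] * z[j] for j in range(K))
        z[k] = 1 if rng.random() < sigma(u) else 0
        counts[tuple(z)] = counts.get(tuple(z), 0) + 1
    return counts

# Two units with an excitatory coupling and negative biases (hypothetical values):
W = [[0.0, 2.0],
     [2.0, 0.0]]
b = [-1.0, -1.0]
counts = sample_network(W, b)
total = sum(counts.values())
for state in sorted(counts):
    print(state, round(counts[state] / total, 3))     # (0,0) and (1,1) dominate
```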

