Abstract

Complex Bayesian Inference in Neural Circuits using Divisive Normalization

A very wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. The goal of marginalization is to recover a distribution p(x) over a variable x, given a joint distribution over several variables, say p(x,y,z). This operation arises in seemingly unrelated computations such as causal reasoning, odor recognition, motor control, visual tracking, coordinate transforms, visual search, decision making, and object recognition, to name just a few. In all cases, marginalization is required to compute the marginal probability of any particular cause, p(x|o), given a set of observations o, based on knowledge of the joint distribution over all causes given the observations, p(x,y,z|o) (e.g., what is the probability of smelling coffee given a set of odorants and knowledge of the joint distribution over all possible smells - such as coffee, orange juice, and bacon - given those odorants?). The question we address here is: how do neural circuits implement such marginalizations?

The answer depends on how neurons represent probability distributions. Given the Poisson-like statistics of spike trains, we have recently argued that neurons use what we call linear probabilistic population codes. These codes have the advantage of reducing probabilistic inference, such as evidence integration or maximum likelihood estimation, to simple linear operations over neural activity. This greatly simplifies both learning and computation in tasks such as multisensory integration, action selection, and accumulation of evidence over time in decision making. Given these computational properties, it would seem important to understand how networks could both perform marginalization and keep the marginal distribution encoded in a linear probabilistic population code.

When all distributions are Gaussian, we can show analytically that perfect marginalization can be achieved with a type of lateral inhibition known as divisive normalization. Moreover, the same nonlinearity works for marginalization over time (as in a Kalman filter) and provides a near-optimal solution for inference over discrete classes. We tested our analytical result with simulations of networks of spiking neurons on four types of computation: coordinate transforms for sensorimotor transformation, Kalman filtering for motor control, explaining away in infants (a.k.a. backward masking), and odor recognition. We show that in all cases the use of divisive normalization can lead to near-perfect performance: in every case the network recovers a posterior distribution encoded by a linear probabilistic population code with only negligible (Shannon) information loss. By contrast, networks that cannot implement either the quadratic nonlinearities or the divisive normalization operation perform quite poorly and/or do not yield linear probabilistic population codes.

This is a particularly intriguing result because divisive normalization has been reported in numerous neural circuits, from insects to mammals. This normalization has been implicated in gain control, attention, and redundancy reduction; our results suggest a much wider role, as a general solution to marginalization with probabilistic population codes.
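To make the operation concrete, here is a brief sketch in notation not taken from the abstract itself. Marginalizing a joint posterior over the nuisance variables y and z yields

    p(x|o) = ∫∫ p(x,y,z|o) dy dz,

or, for discrete variables, p(x|o) = Σ_{y,z} p(x,y,z|o). One common way to formalize a linear probabilistic population code, assumed here purely for illustration, is

    p(x|r) ∝ exp( h(x) · r ),

where r is the vector of spike counts and h(x) a fixed set of kernel functions. Under this form, adding the spike counts of two populations multiplies the likelihoods they encode, which is why operations such as evidence integration reduce to linear combinations of neural activity.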
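For a concrete sense of the divisive-normalization step, the following is a minimal numerical sketch in Python, assuming Poisson spike counts from Gaussian tuning curves and the canonical quadratic-over-sum form of normalization. The constants, tuning model, and function names are illustrative assumptions, not the network equations used by the authors.

    import numpy as np

    # Illustrative sketch only: Poisson spike counts from a hypothetical
    # population with Gaussian tuning curves over a one-dimensional stimulus.
    rng = np.random.default_rng(0)
    preferred = np.linspace(-10.0, 10.0, 64)                       # preferred stimuli
    tuning = 20.0 * np.exp(-0.5 * ((2.0 - preferred) / 2.0) ** 2)  # stimulus at x = 2
    spikes = rng.poisson(tuning)                                    # Poisson-like activity

    def divisive_normalization(r, sigma=1.0):
        # Quadratic nonlinearity followed by division by the summed squared
        # drive of the population plus a constant (a canonical form of
        # divisive normalization, used here purely for illustration).
        drive = r.astype(float) ** 2
        return drive / (sigma ** 2 + drive.sum())

    normalized = divisive_normalization(spikes)
    print(preferred[normalized.argmax()])   # normalized profile peaks near the stimulus

In this toy setting the normalization rescales the population response so that its shape, rather than its overall gain, carries the encoded distribution, which is the role the abstract attributes to it in marginalization.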
Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.
Presentation Type: Poster Presentation
Topic: Poster Presentations
Citation: (2009). Complex Bayesian Inference in Neural Circuits using Divisive Normalization. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.109
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 02 Feb 2009; Published Online: 02 Feb 2009.
