Abstract
Numerous experimental results show that the human brain can represent probability distributions and perform Bayesian inference. However, it remains unclear how the brain implements probabilistic inference in neural circuits. Several models have been proposed to explain how networks of neurons carry out maximum a posteriori (MAP) estimation and marginal inference, but they are all task-specific in that they treat the two problems separately. In this brief, we propose that the human brain could implement MAP estimation and marginal inference in the same network of neurons. We illustrate our result on hidden Markov models and prove that a recurrent neural network (RNN) implementation of belief propagation can be tuned either to perform approximate Bayesian inference (providing posterior or conditional distributions over the latent causes of observations) or to identify the MAP, i.e., the peak of the joint distribution. The key tuning parameter is a temperature that controls the precision of the probability distributions being optimized. Theoretical analyses and experimental results demonstrate that such RNNs can carry out near-optimal MAP estimation and marginal inference.
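The abstract does not spell out the paper's RNN construction, but the temperature mechanism it describes has a standard form: a tempered forward recursion on an HMM in which tau = 1 recovers sum-product message passing (marginal inference) and tau -> 0 approaches max-product (MAP) message passing. The Python sketch below is illustrative only; the names (tempered_forward, log_T, log_E, tau) and the uniform initial prior are assumptions of this sketch, not details taken from the paper.

    import numpy as np
    from scipy.special import logsumexp

    def tempered_forward(log_T, log_E, obs, tau=1.0):
        """Tempered forward recursion on an HMM.

        tau = 1.0 gives the standard sum-product forward algorithm
        (marginal inference); tau -> 0 approaches max-product
        (Viterbi-style MAP scoring).

        log_T: (K, K) log transitions, log_T[i, j] = log p(x_t = j | x_{t-1} = i)
        log_E: (K, M) log emissions,   log_E[j, y] = log p(y_t = y | x_t = j)
        obs:   sequence of observation indices
        """
        K = log_T.shape[0]
        # Uniform prior over the initial hidden state (an assumption of this sketch).
        log_alpha = -np.log(K) + log_E[:, obs[0]]
        for y in obs[1:]:
            # tau * logsumexp(x / tau) interpolates between logsumexp (tau = 1)
            # and max (tau -> 0), so one recursion covers both inference modes.
            scores = (log_alpha[:, None] + log_T) / tau   # shape (K, K)
            log_alpha = tau * logsumexp(scores, axis=0) + log_E[:, y]
        return log_alpha

    # Example: two hidden states, two observation symbols.
    log_T = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
    log_E = np.log(np.array([[0.7, 0.3], [0.1, 0.9]]))
    obs = [0, 0, 1, 1, 1]
    marginal = tempered_forward(log_T, log_E, obs, tau=1.0)   # sum-product
    map_like = tempered_forward(log_T, log_E, obs, tau=1e-3)  # ~ max-product

The single knob tau plays the role of the abstract's temperature parameter: at tau = 1 the update is the ordinary forward algorithm, and as tau shrinks, tau * logsumexp(x / tau) tends to max(x), so the same recursion switches to MAP scoring without changing the network structure.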