Abstract

Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules.
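To make the training setup concrete, below is a minimal sketch (Python/NumPy) of the kind of experiment the abstract describes, assuming Gaussian tuning curves, Poisson input spike counts, and a two-layer rectified-linear network trained on a cue-combination task with a plain squared-error signal; the parameter values and function names are illustrative and not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact setup): a generic feedforward network
# with rectified-linear hidden units is trained with a plain squared-error rule
# on a cue-combination task whose inputs are Poisson population codes.
import numpy as np

rng = np.random.default_rng(0)

n_in = 50                             # neurons per input population (illustrative)
n_hidden = 100                        # hidden units (illustrative)
pref = np.linspace(-10, 10, n_in)     # preferred stimuli of the Gaussian tuning curves
tuning_width = 2.0

def population_response(s, gain):
    """Poisson spike counts from Gaussian tuning curves; the gain sets cue reliability."""
    rates = gain * np.exp(-0.5 * ((s - pref) / tuning_width) ** 2)
    return rng.poisson(rates).astype(float)

def make_trial():
    s = rng.uniform(-5, 5)                     # true stimulus on this trial
    g1, g2 = rng.uniform(0.5, 3.0, size=2)     # cue reliabilities vary trial to trial
    r = np.concatenate([population_response(s, g1), population_response(s, g2)])
    return r, s

# Generic two-layer network: every unit performs the same rectified-linear operation.
W1 = rng.normal(0.0, 0.05, (n_hidden, 2 * n_in))
b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.05, n_hidden)
b2 = 0.0
lr = 1e-4

for trial in range(100_000):
    r, s = make_trial()
    h = np.maximum(0.0, W1 @ r + b1)           # hidden-layer activity
    s_hat = float(w2 @ h + b2)                 # network's estimate of the stimulus
    err = s_hat - s                            # scalar, non-probabilistic error signal

    # Error-based updates (gradient of the squared error, i.e. backprop/delta rule).
    dh = err * w2 * (h > 0.0)
    w2 -= lr * err * h
    b2 -= lr * err
    W1 -= lr * np.outer(dh, r)
    b1 -= lr * dh
```

Near-optimality could then be assessed on held-out trials by comparing the trained network's estimates against the posterior mean computed from the same spike counts.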

Highlights

  • Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks

  • We show that generic neural networks trained with non-probabilistic error-based feedback perform near-optimal probabilistic inference in tasks with both categorical and continuous outputs (see the sketch of the two feedback signals after this list)

  • We investigate whether the time course of error-based learning in generic neural networks is realistic for a non-linguistic animal learning to perform a probabilistic inference task
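As a concrete reading of the second highlight, the sketch below (illustrative function names, not the authors' code) shows what non-probabilistic error-based feedback looks like in the two output regimes: in both cases the teaching signal is a plain error derived from the correct answer on that trial, never an explicit likelihood or posterior.

```python
def continuous_feedback(estimate, true_value):
    """Continuous-output tasks (e.g. cue combination): the signed error used by
    a squared-error (delta) rule."""
    return estimate - true_value

def categorical_feedback(p_category, true_label):
    """Categorical tasks (e.g. probabilistic categorization): for a sigmoid
    output unit, p_category - true_label equals the gradient of the
    cross-entropy loss with respect to the unit's net input, using only the
    correct binary label (0 or 1) as feedback."""
    return p_category - true_label
```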



Introduction

Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. In previous neural implementations of such inference, the specific form of divisive normalization that individual neurons have to perform differs substantially from task to task. It is therefore unclear whether probabilistic inference can be implemented in generic neural networks, whose neurons all perform the same type of neurally plausible operation. Our main contribution is to connect generic neural networks to near-optimal probabilistic inference in common psychophysical tasks. For these tasks, we analyze the networks' generalization performance, their efficiency in terms of the number of neurons needed to achieve a given level of performance, the nature of the emergent probabilistic population code, and the mechanistic insights that can be gleaned from the trained networks.

[Figure: schematic of example tasks and their input populations, including cue combination, coordinate transformation, and Kalman filtering.]
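The contrast drawn here can be made explicit with a small schematic sketch (the forms and names are illustrative, not taken from any specific published model): earlier hand-crafted implementations require each neuron to perform a divisive normalization whose exact form is tailored to the task, whereas in a generic network every neuron applies the same rectified-linear operation regardless of task.

```python
import numpy as np

def divisive_normalization(r, w, sigma=1.0):
    """Hand-crafted style unit: a weighted drive divided by summed population
    activity; previous implementations tune the exact form of this operation
    to each task."""
    return float(w @ r) / (sigma + r.sum())

def generic_unit(r, w, b=0.0):
    """Generic unit used for every neuron and every task: a weighted sum
    followed by rectification."""
    return max(0.0, float(w @ r) + b)
```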

