Abstract

Active inference is a normative framework for explaining behaviour under the free energy principle—a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy—a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error—plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.
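The abstract contrasts gradient descent on free energy with natural gradient descent, which preconditions the gradient by the Fisher information metric and thereby follows the steepest descent in information space. As a minimal illustration (not the paper's model), the sketch below compares the two schemes for a Bernoulli belief parameterised by its log-odds, minimising the KL divergence to a target belief; the target value and learning rate are arbitrary assumptions. It also accumulates a discretised information length, the quantity the paper links to metabolic cost.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical target (posterior) belief, encoded by its log-odds.
p_star = 0.8
v_star = np.log(p_star / (1 - p_star))

def grad_F(v):
    """Gradient of F(v) = KL[q(v) || p*] w.r.t. the log-odds v,
    where q(v) = sigmoid(v) is a Bernoulli belief."""
    q = sigmoid(v)
    return q * (1 - q) * (v - v_star)

def descend(natural, v0=-4.0, lr=0.5, steps=200):
    """Vanilla vs. natural gradient descent on F, accumulating the
    information length: the sum of local Fisher distances sqrt(G)*|dv|."""
    v, length = v0, 0.0
    for _ in range(steps):
        g = grad_F(v)
        fisher = sigmoid(v) * (1 - sigmoid(v))      # Fisher metric G(v)
        dv = -lr * (g / fisher if natural else g)   # precondition by 1/G
        length += np.sqrt(fisher) * abs(dv)
        v += dv
    return sigmoid(v), length

q_vanilla, len_vanilla = descend(natural=False)
q_natural, len_natural = descend(natural=True)
print(q_vanilla, q_natural)   # both approach p* = 0.8
```

The natural gradient update cancels the sigmoid's vanishing slope near extreme beliefs, so convergence is geometric regardless of where the belief starts, whereas the vanilla scheme slows down when beliefs are near-deterministic.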

Highlights

  • Active inference is a normative framework for explaining behaviour under the free energy principle, a theory of self-organisation originating in neuroscience [1,2,3,4] that characterises certain systems at steady-state as having the appearance of sentience [5,6]

  • We show that these dynamics approximate natural gradient descent on free energy, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space [85]

  • Our results suggest that state-estimation in active inference is a good approximation to natural gradient descent on free energy


Introduction

Active inference is a normative framework for explaining behaviour under the free energy principle, a theory of self-organisation originating in neuroscience [1,2,3,4] that characterises certain systems at steady-state as having the appearance of sentience [5,6]. This behaviour rests on a generative model, which encodes how states external to the agent influence the agent's sensations. Organisms infer their surrounding environment from sensory data by inverting the generative model through minimisation of variational free energy. This corresponds to performing approximate Bayesian inference (known as variational Bayes) [2,3,8,9,10,11], a standard method in machine learning, or equivalently to minimising the discrepancy between predictions and sensations [1,12]. Active inference agents show competitive or state-of-the-art performance in a wide variety of simulated environments [34,46,47,56].
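To make the inversion of a generative model concrete, the following sketch performs gradient descent on the variational free energy F = E_q[ln q(s) − ln p(o, s)] for a hypothetical two-state discrete model (the prior, likelihood, and learning rate are illustrative assumptions, not taken from the paper). Because exact inference is tractable here, the descent can be checked against the exact Bayesian posterior.

```python
import numpy as np

# Hypothetical two-state generative model: prior p(s) over hidden states
# and likelihood p(o|s), with rows indexing outcomes.
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.8, 0.2],
                       [0.2, 0.8]])
o = 0                                  # observed outcome

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def free_energy(v):
    """Variational free energy F = E_q[ln q(s) - ln p(o,s)]."""
    q = softmax(v)
    return q @ (np.log(q) - np.log(likelihood[o] * prior))

v = np.zeros(2)                        # initial beliefs: q(s) uniform
lr = 0.5
for _ in range(200):
    q = softmax(v)
    # Prediction error: the free energy gradient w.r.t. q(s)
    eps = np.log(q) - np.log(likelihood[o] * prior)
    grad = q * (eps - q @ eps)         # chain rule through the softmax
    v -= lr * grad

posterior = softmax(v)
exact = likelihood[o] * prior / (likelihood[o] * prior).sum()
print(posterior, exact)                # descent recovers the exact posterior
```

At the fixed point the prediction error eps is constant across states, so the gradient vanishes and q(s) matches the posterior p(s|o), which is the sense in which minimising free energy implements (here exact, in general approximate) Bayesian inference.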
