Abstract

This paper considers the relationship between thermodynamics, information and inference. In particular, it explores the thermodynamic concomitants of belief updating, under a variational (free energy) principle for self-organization. In brief, any (weakly mixing) random dynamical system that possesses a Markov blanket—i.e. a separation of internal and external states—is equipped with an information geometry. This means that internal states parametrize a probability density over external states. Furthermore, at non-equilibrium steady-state, the flow of internal states can be construed as a gradient flow on a quantity known in statistics as Bayesian model evidence. In short, there is a natural Bayesian mechanics for any system that possesses a Markov blanket. Crucially, this means that there is an explicit link between the inference performed by internal states and their energetics—as characterized by their stochastic thermodynamics. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
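The abstract's central claim—that the flow of internal states can be read as a gradient flow on (log) model evidence—can be sketched schematically. The notation below (μ for internal states, η for external states, b for blanket states, q_μ for the variational density parametrized by internal states) follows the general free-energy-principle literature and is not necessarily the paper's exact notation:

```latex
% Variational free energy F is an upper bound on surprisal (negative log evidence),
% because the bound gap is a non-negative KL divergence:
F(\mu, b)
  = \mathbb{E}_{q_\mu(\eta)}\!\left[\ln q_\mu(\eta) - \ln p(\eta, b)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[\,q_\mu(\eta)\,\middle\|\,p(\eta \mid b)\,\right]}_{\geq\, 0}
    \;-\; \ln p(b)
% At non-equilibrium steady state, the flow of internal states can then be
% construed as a gradient flow on this bound:
\dot{\mu} = -\,\nabla_{\mu} F(\mu, b)
```

Minimizing F with respect to μ therefore simultaneously tightens the bound (making q_μ approximate the posterior over external states) and maximizes a lower bound on log model evidence ln p(b).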

Highlights

  • Any object of study must, implicitly or explicitly, be separated from the rest of the universe

  • An interesting aspect of the analysis presented is that it does not commit to a spatial or temporal scale. This is important, as it means that the interpretation of the dynamics of internal states depends upon the scale at which we identify their Markov blanket

  • We started from the simple, but fundamental, condition that a system must remain separable from its environment for an appreciable length of time [31]. On unpacking this notion— using concepts from information geometry and thermodynamics—we found that the states internal to a Markov blanket look as if they perform variational Bayesian inference, optimizing posterior beliefs about the external world
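The last highlight—that internal states look as if they perform variational Bayesian inference—can be illustrated with a toy numerical sketch (not taken from the paper). Assuming a Gaussian prior over an external state η ~ N(0, 1), a blanket state observed as b | η ~ N(η, σ²), and a fixed-variance Gaussian variational density whose mean μ plays the role of the internal state, gradient descent on the variational free energy drives μ to the exact Bayesian posterior mean:

```python
def free_energy_grad(mu, b, sigma2=1.0):
    """dF/dmu for prior eta ~ N(0, 1), likelihood b | eta ~ N(eta, sigma2),
    and a fixed-variance Gaussian variational density with mean mu
    (the 'internal state'). Constant terms drop out of the gradient."""
    return mu + (mu - b) / sigma2

def gradient_flow(b, steps=1000, lr=0.05):
    """Discretized gradient flow of the internal state on free energy."""
    mu = 0.0
    for _ in range(steps):
        mu -= lr * free_energy_grad(mu, b)
    return mu

b = 2.0
mu_star = gradient_flow(b)
# For sigma2 = 1, the exact posterior mean is b / 2 by conjugacy,
# so the gradient flow's fixed point coincides with Bayesian inference.
posterior_mean = b / 2.0
```

The point of the sketch is only that a dynamical system descending a free energy gradient ends up encoding posterior beliefs—the "as if" inference described in the highlight.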


Introduction

Any object of study must, implicitly or explicitly, be separated from the rest of the universe.

Rearranging equation (4.7), we can express an upper bound G on the expected surprise associated with a given trajectory (where the tightness of the bound depends upon the information length):

G(μ) ≥ E_q[ℑ(π[τ] | μ)] = H[q(π[τ])]

G(μ) = E_q[ℑ(η[τ], π[τ]) + ln q(η[τ] | π[τ])]

This implies that those future dynamics (i.e. choices of q(π[τ])) that would be least surprising (on average) given current internal states are those that have the lowest risk (i.e. where the predicted trajectory of the external states shows minimal divergence from those at steady state), while minimizing the ambiguity of the association between external states and particular states. We conclude by considering to what extent this anthropomorphic interpretation is licensed by the underlying physics.
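The risk-plus-ambiguity reading of the bound G described above can be made concrete for discrete distributions. The sketch below is an illustrative assumption, not the paper's implementation: `q_eta` is the predicted distribution over external states, `p_eta` the steady-state distribution (their divergence is the risk), and `lik[i]` the conditional distribution over particular states given each external state (its expected entropy is the ambiguity):

```python
import math

def kl(q, p):
    """KL divergence between two discrete distributions (the risk term)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def entropy(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def expected_free_energy(q_eta, p_eta, lik):
    """G = risk + ambiguity for discrete distributions.

    q_eta : predicted distribution over external states
    p_eta : steady-state distribution over external states
    lik   : lik[i] is the conditional over particular states given
            external state i (hypothetical names, for illustration only)
    """
    risk = kl(q_eta, p_eta)
    ambiguity = sum(q * entropy(lik[i]) for i, q in enumerate(q_eta))
    return risk + ambiguity
```

When the prediction matches the steady state and each external state maps deterministically onto particular states, both terms vanish and G = 0; any divergence from steady state, or any noisy external-to-particular mapping, raises the bound.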
