Abstract
Improved mean-field techniques are a central theme of statistical physics methods applied to inference and learning. We revisit here some of these methods using high-temperature expansions for disordered systems initiated by Plefka, Georges and Yedidia. We derive the Gibbs free entropy and the corresponding self-consistent equations for a generic class of statistical models with correlated matrices, and show in particular that many classical approximation schemes, such as adaptive TAP, expectation-consistency, or the approximations behind the vector approximate message passing algorithm, all rely on the same assumptions, which are also at the heart of high-temperature expansions. We focus on the case of rotationally invariant random coupling matrices in the ‘high-dimensional’ limit, in which the number of samples and the dimension are both large but with a fixed ratio. This encapsulates many widely studied models, such as restricted Boltzmann machines or generalized linear models with correlated data matrices. In this general setting, we show that all the approximation schemes described above are equivalent, and we conjecture that they are exact in the thermodynamic limit in the replica-symmetric phases. We reach this conclusion by resummation of the infinite perturbation series, which generalises a seminal result of Parisi and Potters. A rigorous derivation of this conjecture is an interesting mathematical challenge. On the way to these conclusions, we uncover several diagrammatic results in connection with free probability and random matrix theory that are of independent interest.
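For orientation, the expansion named in the abstract can be stated in its textbook form. The following is a minimal sketch for an Ising model with pairwise couplings J_ij (our illustrative notation, not the paper's general correlated-matrix setting): the free entropy at fixed magnetizations is expanded in powers of the inverse temperature, and truncation at second order recovers the classical TAP free entropy and self-consistent equations.

```latex
% Plefka / Georges--Yedidia expansion (Ising spins s_i = ±1, couplings J_{ij};
% notation ours, for illustration). The free entropy at fixed magnetizations
% m_i = <s_i> is expanded in powers of the inverse temperature beta:
\Phi(\beta,\mathbf{m}) \;=\; \Phi(0,\mathbf{m})
  \;+\; \beta\,\partial_\beta\Phi\big|_{\beta=0}
  \;+\; \frac{\beta^2}{2}\,\partial_\beta^2\Phi\big|_{\beta=0} \;+\;\cdots
% Truncating at second order yields the TAP free entropy:
\Phi_{\mathrm{TAP}}(\mathbf{m}) =
  -\sum_i \Big[\tfrac{1+m_i}{2}\ln\tfrac{1+m_i}{2}
             + \tfrac{1-m_i}{2}\ln\tfrac{1-m_i}{2}\Big]
  + \beta\sum_{i<j} J_{ij}\, m_i m_j
  + \frac{\beta^2}{2}\sum_{i<j} J_{ij}^2\,(1-m_i^2)(1-m_j^2),
% whose stationarity conditions are the TAP equations, with the second-order
% term producing the Onsager reaction correction:
m_i \;=\; \tanh\!\Big(\beta\sum_j J_{ij}\, m_j
   \;-\; \beta^2\, m_i \sum_j J_{ij}^2\,(1-m_j^2)\Big).
```

For rotationally invariant coupling ensembles, the Onsager coefficient above is no longer simply J_ij², which is precisely where the Parisi-Potters resummation generalised in the paper enters.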
Highlights
– Many inference and learning tasks can be formulated as a statistical physics problem, where one needs to compute or approximate the marginal distributions of single variables in an interacting model.
– We show in Sec. 3.4 how we can use these results to derive the Plefka-expanded free entropy for a very broad class of bipartite models, which includes Generalized Linear Models (GLMs) with correlated data matrices and the Compressed Sensing problem.
– The Generalized Approximate Message Passing (GAMP) algorithm [Ran11] was shown in [KMS+12] to be equivalent to the TAP equations, a result that we find back in Sec. 4.1, while TAP equations have already been iterated for Restricted Boltzmann Machines, see [TGM+18]; a minimal illustration of such a TAP iteration is sketched after this list.
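To make "iterating TAP equations" concrete, here is a minimal Python sketch of damped fixed-point iteration of the second-order (SK-type) TAP equations. The function name `tap_iterate` and all parameters are ours, and this is the simplest i.i.d.-couplings setting, not the paper's scheme for rotationally invariant matrices (which modifies the Onsager term through the spectrum of the couplings) nor the bipartite RBM case.

```python
import numpy as np

def tap_iterate(J, beta, h=None, damping=0.5, n_steps=500, tol=1e-8):
    """Damped fixed-point iteration of the SK-type TAP equations:
        m_i = tanh( beta*(h_i + sum_j J_ij m_j)
                    - beta^2 * m_i * sum_j J_ij^2 (1 - m_j^2) )
    Illustrative sketch only; names and defaults are our own choices."""
    n = J.shape[0]
    h = np.zeros(n) if h is None else h
    m = 1e-3 * np.random.randn(n)  # small random initialisation
    for _ in range(n_steps):
        # Onsager reaction term from the second order of the expansion
        onsager = beta**2 * m * ((J**2) @ (1.0 - m**2))
        m_new = np.tanh(beta * (h + J @ m) - onsager)
        delta = np.max(np.abs(m_new - m))
        m = damping * m_new + (1.0 - damping) * m  # damped update
        if delta < tol:
            break
    return m

# Toy usage: symmetric SK couplings J_ij ~ N(0, 1/n), zero diagonal
n = 200
A = np.random.randn(n, n) / np.sqrt(n)
J = (A + A.T) / 2
np.fill_diagonal(J, 0.0)
m = tap_iterate(J, beta=0.5)
```

In practice the naive iteration often needs damping to converge; message-passing schemes such as (G)AMP avoid this by the proper time indexing of the Onsager term, which is part of what the GAMP/TAP equivalence cited above refers to.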
Summary
3.1 Expectation Consistency, adaptive TAP, and Vector Approximate Message Passing approximations
4.3 Generalized Vector Approximate Message Passing (G-VAMP) for Generalized Linear Models
5.4 The higher-order moments and their influence on the diagrammatics in the symmetric model