Abstract

The Survey and Review article in this issue is “On the Geometry of Maximum Entropy Problems,” by Michele Pavon and Augusto Ferrante. The article begins by summarizing four classical maximum entropy problems, motivated by applications in physics and statistical data analysis. The first of these applications, Boltzmann's loaded dice, should be comprehensible to any SIAM reader, and it neatly sets up the key idea of finding the most probable macrostate. As the authors mention, Edwin Jaynes, the celebrated Bayesian, summarized this endeavor as the requirement to “predict states that can be realized by Nature in the greatest number of ways, while agreeing with your macroscopic information.”

Many readers will know that the concept of maximum entropy has a long and checkered history. I first became aware of this when I read the wonderful review by Persi Diaconis [A Frequentist Does This, A Bayesian That, book review, SIAM News, 37 (2) (2004), pp. 6--7] of Jaynes's classic tome [Probability Theory: The Logic of Science, Cambridge University Press, Cambridge, UK, 2003]. I couldn't resist a review with statements like “Jaynes is known as an objective Bayesian, but he has surely written the field's most subjective account,” so I bought the book and learned some more.

The authors of this Survey and Review article make it clear that they are not attempting to explain or justify the use of maximum entropy, or to prove the existence of solutions. Instead, their aim is to unify and simplify the solution of a large class of these problems, in particular providing analytical forms for the solutions. The inverse problems considered require optimization subject to linear constraints. The authors show that a very wide range of seemingly unrelated maximum entropy problems, involving probability distributions, spectral densities, and covariance matrices, can be dealt with by the same geometric principle. This unified approach is applied to some widely studied and high-impact problems, including Burg and Shannon entropy maximization, Dempster's covariance completion, and Gibbs-like variational principles. Classical and recent results are rederived, and new extensions are discovered. This integrative article will appeal in particular to readers with interests in optimization, statistical physics, information theory, and machine learning.
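To make the setting concrete, the prototypical discrete instance (the textbook loaded-die formulation in Jaynes's spirit; the notation below is my own shorthand, not necessarily the authors') maximizes the Shannon entropy of a distribution $p = (p_1, \dots, p_n)$ subject to linear moment constraints:

    \max_{p \ge 0} \; H(p) = -\sum_{i=1}^{n} p_i \log p_i
    \quad \text{subject to} \quad \sum_{i=1}^{n} p_i = 1, \qquad \sum_{i=1}^{n} a_i p_i = m.

Introducing a Lagrange multiplier $\lambda$ for the moment constraint yields the familiar exponential-family (Gibbs) solution

    p_i^{\ast} = \frac{e^{-\lambda a_i}}{Z(\lambda)}, \qquad Z(\lambda) = \sum_{j=1}^{n} e^{-\lambda a_j},

with $\lambda$ chosen so that the observed mean $m$ is matched. The article shows how a single geometric argument delivers closed forms of this kind across the much broader settings listed above.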
