Abstract

The space of probability densities is an infinite-dimensional Riemannian manifold, with Riemannian metrics in two flavors: Wasserstein and Fisher--Rao. The former is pivotal in optimal mass transport (OMT), whereas the latter occurs in information geometry---the differential geometric approach to statistics. The Riemannian structures restrict to the submanifold of multivariate Gaussian distributions, where they induce Riemannian metrics on the space of covariance matrices. Here we give a systematic description of classical matrix decompositions (or factorizations) in terms of Riemannian geometry and compatible principal bundle structures. Both Wasserstein and Fisher--Rao geometries are discussed. The link to matrices is obtained by considering OMT and information geometry in the category of linear transformations and multivariate Gaussian distributions. This way, OMT is directly related to the polar decomposition of matrices, whereas information geometry is directly related to the $QR$, Cholesky, spectral, and singular value decompositions. We also give a coherent description of gradient flow equations for the various decompositions; most flows are illustrated in numerical examples. The paper is a combination of previously known and original results. As a survey it covers the Riemannian geometry of OMT and polar decompositions (smooth and linear category), entropy gradient flows, and the Fisher--Rao metric and its geodesics on the statistical manifold of multivariate Gaussian distributions. The original contributions include new gradient flows associated with various matrix decompositions, new geometric interpretations of previously studied isospectral flows, and a new proof of the polar decomposition of matrices based on an entropy gradient flow.
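On the submanifold of zero-mean Gaussians, the Wasserstein metric mentioned in the abstract induces the Bures--Wasserstein distance on covariance matrices, d(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}). The following is a minimal sketch of that formula (the helper name `bures_wasserstein` is our own, not from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A, B):
    """Wasserstein distance between zero-mean Gaussians N(0, A) and N(0, B)."""
    s = sqrtm(A)
    cross = sqrtm(s @ B @ s)
    # .real strips the tiny imaginary round-off sqrtm can produce
    return np.sqrt(np.trace(A) + np.trace(B) - 2 * np.trace(cross)).real

# For commuting (here diagonal) covariances the distance reduces to the
# Euclidean distance between the matrix square roots.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
print(bures_wasserstein(A, B))  # sqrt(2) - 1 for this diagonal pair
```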

Highlights

  • The influence of matrix decompositions in scientific computing cannot be overestimated

  • We show how the geometry of optimal mass transport (OMT) gives rise to the polar decomposition of maps, obtained by Brenier [13]

  • In Wasserstein geometry we start with a natural metric on Diff(Rn) (or GL(n)) and we show that it induces a metric on Dens(Rn) (or P(n))
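In the linear category, Brenier's polar decomposition of maps reduces to the classical polar decomposition of matrices, A = UP with U orthogonal and P symmetric positive definite. A minimal sketch using SciPy's built-in routine:

```python
import numpy as np
from scipy.linalg import polar

# An invertible matrix A in GL(3).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

# Polar decomposition A = U P: U orthogonal, P symmetric positive
# definite -- the matrix analogue of Brenier's factorization of maps.
U, P = polar(A, side="right")

assert np.allclose(U @ P, A)              # A = U P
assert np.allclose(U.T @ U, np.eye(3))    # U is orthogonal
assert np.allclose(P, P.T)                # P is symmetric
```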


Introduction

The influence of matrix decompositions in scientific computing cannot be overestimated. The subject, treating computer algorithms for matrix decompositions, is part of the curriculum of almost every mathematics department. A typical course follows an algorithmic approach, based on algebra, combinatorics, and some analysis. It is possible, though far less common, to follow a geometric approach based on Riemannian geometry.

Keywords: Polar decomposition, optimal transport, Wasserstein geometry, Otto calculus, entropy gradient flow, Lyapunov equation, information geometry, Fisher–Rao metric, QR decomposition, Iwasawa decomposition, Cholesky decomposition, spectral decomposition, singular value decomposition, isospectral flow, Toda flow, Brockett flow, double bracket flow, orthogonal group, Hessian metric, multivariate Gaussian distribution.
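The decompositions listed among the keywords are all available as standard library routines. As a quick orientation, here is a sketch computing four of them for a symmetric positive definite matrix S (i.e. a covariance matrix) with NumPy:

```python
import numpy as np

# A symmetric positive definite (covariance) matrix.
S = np.array([[4.0, 2.0],
              [2.0, 3.0]])

Q, R = np.linalg.qr(S)        # QR: orthogonal times upper triangular
L = np.linalg.cholesky(S)     # Cholesky: S = L L^T, L lower triangular
w, V = np.linalg.eigh(S)      # spectral: S = V diag(w) V^T
U, s, Vt = np.linalg.svd(S)   # singular value decomposition

# Each factorization reconstructs S.
assert np.allclose(Q @ R, S)
assert np.allclose(L @ L.T, S)
assert np.allclose(V @ np.diag(w) @ V.T, S)
assert np.allclose(U @ np.diag(s) @ Vt, S)
```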

Klas Modin
