Abstract

Gaussian (also known as normal) probability distributions in \(\mathbb{R}^d\) play a central role in statistics and are important in many branches of pure and applied mathematics, statistical physics, and several other fields. In the univariate (\(d=1\)) case, the corresponding bell curve is known even to many people outside the sciences. At present, there is much interest in algorithms that generate random vectors \(X\) in \(\mathbb{R}^d\) distributed according to a target Gaussian density function. In the scalar (\(d=1\)) case, or when \(d>1\) but the scalar components of \(X\) are not correlated, there are several simple, efficient algorithms, including the well-known Box--Muller method. In the general case, where \(d>1\) and there are correlations between the scalar components, the standard approach is to perform a linear change of variables \(X = LY\) so that the components of \(Y\) are uncorrelated and may be sampled easily. Typically, the matrix \(L\) is obtained via a Cholesky factorization of the precision matrix of the target distribution, an approach which works well if \(d\) is not too large (for a laptop if, say, \(d < 10^5\)). However, many recent applications, including image analysis, spatial statistics, graphical structures, and others, operate with very large values of \(d\), and the Cholesky algorithm, with \(\mathcal{O}(d^3)\) complexity and \(\mathcal{O}(d^2)\) memory requirements, may not be feasible. The Survey and Review paper in this issue, “High-Dimensional Gaussian Sampling: A Review and a Unifying Approach Based on a Stochastic Proximal Point Algorithm” by Maxime Vono, Nicolas Dobigeon, and Pierre Chainais, addresses the problem of obtaining Gaussian samples when the Cholesky factorization is not an option. The paper presents, in a unified way, and compares many algorithms suggested in different scientific communities. The algorithms may be grouped into two classes.
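The Cholesky route described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the function name and test matrices are ours, and we use one common convention in which the precision matrix \(Q\) is factored as \(Q = LL^\top\) and a standard normal draw is back-substituted through \(L^\top\).

```python
import numpy as np

def sample_gaussian_precision(mu, Q, rng):
    """Draw one sample from N(mu, Q^{-1}) via a Cholesky factor of the precision Q.

    Factor Q = L L^T (O(d^3) work, O(d^2) memory), then solve
    L^T x = y with y ~ N(0, I).  Since cov(x) = L^{-T} L^{-1} = Q^{-1},
    mu + x has the target distribution.
    """
    L = np.linalg.cholesky(Q)               # lower-triangular factor of Q
    y = rng.standard_normal(len(mu))        # uncorrelated standard normals
    x = np.linalg.solve(L.T, y)             # back-substitution, O(d^2)
    return mu + x
```

For large \(d\) the factorization step dominates, which is exactly the bottleneck the surveyed methods aim to avoid.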
In the first, numerical linear algebra techniques are used to reduce the computational complexity and/or the memory requirements. In the second, the samples are obtained via Markov chain Monte Carlo approaches; surprisingly, the resulting algorithms are very much related to classical iterative methods for the solution of linear systems, including Jacobi, Gauss--Seidel, and SOR. For this reason the paper will be relevant to readers interested in numerical linear algebra. For those whose work requires generating random samples, the paper includes a neat decision tree to choose, in a given application, among the many available algorithms. The authors have also made available software for all the methods considered in their survey.
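As an illustration of the connection between these samplers and classical iterative methods, here is a minimal sketch (the function name and conventions are ours, not taken from the paper) of one coordinate-wise Gibbs sweep targeting \(N(\mu, Q^{-1})\). The deterministic part of each coordinate update is exactly a Gauss--Seidel step for the linear system \(Q(x - \mu) = 0\); the injected noise is what turns the linear solver into a sampler.

```python
import numpy as np

def gibbs_sweep(x, mu, Q, rng):
    """One in-place Gibbs sweep over the coordinates, targeting N(mu, Q^{-1}).

    The conditional of coordinate i given the rest is
        N( mu_i - (1/Q_ii) * sum_{j != i} Q_ij (x_j - mu_j),  1/Q_ii ),
    whose mean is precisely the Gauss--Seidel update for Q (x - mu) = 0.
    """
    d = len(x)
    for i in range(d):
        # off-diagonal part of row i applied to the current state
        s = Q[i] @ (x - mu) - Q[i, i] * (x[i] - mu[i])
        cond_mean = mu[i] - s / Q[i, i]
        x[i] = cond_mean + rng.standard_normal() / np.sqrt(Q[i, i])
    return x
```

Without the noise term, repeated sweeps would converge to the mean \(\mu\); with it, the chain converges in distribution to the target Gaussian, which is the core observation behind this class of methods.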
