Abstract

The Metropolis algorithm is a fundamental Markov chain Monte Carlo (MCMC) sampling technique for drawing random samples from high-dimensional probability distributions (i.e., Gibbs distributions) represented by probabilistic graphical models. The traditional Metropolis algorithm is a sequential algorithm. A classic result on its fast convergence states that when the Dobrushin-Shlosman condition for the Metropolis algorithm is satisfied, the algorithm converges rapidly within $O(n \log n)$ steps, where $n$ is the number of variables. This paper studies a distributed variant of the Metropolis algorithm, called the local-Metropolis algorithm. We analyze the correctness and convergence of this new algorithm and show that it always converges to the correct Gibbs distribution; moreover, for a natural class of triangle-free probabilistic graphical models, as long as the same Dobrushin-Shlosman condition is satisfied, the local-Metropolis algorithm converges within $O(\log n)$ rounds of distributed computing. Compared to the traditional sequential Metropolis algorithm, this achieves an asymptotically optimal $\Omega(n)$ factor of speedup. Concrete applications include distributed sampling algorithms for graph coloring, the hardcore model, and the Ising model.
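To make the sequential baseline concrete, the following is a minimal sketch (not the paper's algorithm) of single-site Metropolis sampling for one of the applications the abstract mentions, the ferromagnetic Ising model, here on an $n$-cycle; the function name, graph choice, and parameters are illustrative assumptions:

```python
import math
import random

def metropolis_ising(n, beta, steps, seed=0):
    """Single-site Metropolis sampler for the ferromagnetic Ising model
    on an n-cycle (illustrative sketch, not the paper's local-Metropolis).

    The Gibbs distribution puts weight exp(beta * sum over edges of s_u * s_v)
    on each configuration of +/-1 spins. Each step picks a uniformly random
    vertex, proposes flipping its spin, and accepts with the Metropolis
    probability min(1, exp(change in log-weight)).
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        v = rng.randrange(n)
        left, right = spins[(v - 1) % n], spins[(v + 1) % n]
        # Flipping s_v changes sum(s_u * s_v) over its two edges by
        # -2 * s_v * (left + right), so the log-weight changes by:
        delta = -2.0 * beta * spins[v] * (left + right)
        if delta >= 0 or rng.random() < math.exp(delta):
            spins[v] = -spins[v]
    return spins
```

The sequential structure is visible here: each step updates a single variable, which is why roughly $O(n \log n)$ steps are needed under the Dobrushin-Shlosman condition. The local-Metropolis algorithm studied in the paper instead updates all variables in parallel rounds, reducing this to $O(\log n)$ rounds.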
