Abstract

Sampling from a Markov random field Π can be performed efficiently via Monte Carlo methods by simulating a Markov chain that converges weakly to Π. We consider a class of local updating dynamics that are reversible with respect to Π; it includes the Metropolis algorithm (MA) and the Gibbs sampler (GS). We investigate the speed of weak convergence of these Markov chains in terms of their second-largest eigenvalues in absolute value. We study the general algebraic structure and then the stochastic Ising model in detail. We conclude the following: the GS is faster than updating twice locally with the MA; in the Ising case, the MA is the best at low temperature but the worst at high temperature; and there are dynamics faster than the GS at high temperature. The results clear up some intuitive misconceptions.
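The comparison described above can be illustrated numerically on a toy instance. The following sketch (not taken from the paper; the helper names `ising_pi`, `random_scan`, and `slem` are illustrative) builds the random-scan single-site Metropolis and Gibbs transition matrices for a small 1-D Ising chain and computes each chain's second-largest eigenvalue in absolute value, the quantity the paper uses to measure convergence speed.

```python
import itertools
import numpy as np

def ising_pi(n, beta):
    # Unnormalized Gibbs measure of a 1-D Ising chain with free boundaries.
    states = list(itertools.product([-1, 1], repeat=n))
    w = np.array([np.exp(beta * sum(s[i] * s[i + 1] for i in range(n - 1)))
                  for s in states])
    return states, w / w.sum()

def random_scan(n, states, pi, flip_prob):
    # Random-scan single-site dynamics: pick a site uniformly at random,
    # then flip its spin with probability flip_prob(pi_current, pi_flipped).
    idx = {s: k for k, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        for i in range(n):
            t = list(s)
            t[i] = -t[i]
            l = idx[tuple(t)]
            p = flip_prob(pi[k], pi[l])
            P[k, l] += p / n
            P[k, k] += (1 - p) / n
    return P

def slem(P):
    # Second-largest eigenvalue in absolute value; the chains are
    # reversible w.r.t. pi, so the spectrum is real.
    ev = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return ev[1]

metropolis = lambda pk, pl: min(1.0, pl / pk)  # accept flip w.p. min(1, ratio)
gibbs = lambda pk, pl: pl / (pk + pl)          # heat-bath conditional

n, beta = 3, 0.5
states, pi = ising_pi(n, beta)
P_ma = random_scan(n, states, pi, metropolis)
P_gs = random_scan(n, states, pi, gibbs)
print("MA SLEM:", slem(P_ma), " GS SLEM:", slem(P_gs))
```

Both kernels satisfy detailed balance with respect to Π by construction, so the smaller SLEM indicates the faster weakly convergent chain; rerunning with different values of `beta` shows how the temperature regime affects which dynamics wins.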
