Abstract

The Metropolis-Hastings (MH) algorithm, beyond its role in Markov chain Monte Carlo sampling and simulation, is widely used to construct a random walk that achieves a given, desired stationary distribution over a graph. Applications include crawling-based sampling of large graphs and online social networks, statistical estimation and inference from massive-scale networked data, efficient search in unstructured peer-to-peer networks, and randomized routing and movement strategies in wireless sensor networks, to name a few. Despite its versatility, the MH algorithm often induces self-transitions of the resulting random walk at some nodes. This is inefficient in the sense of the Peskun ordering, a partial order on the off-diagonal elements of the transition matrices of two Markov chains, and in turn leads to deficient performance in terms of the asymptotic variance of time averages and expected hitting times, with slower convergence. To alleviate this problem, we present simple yet effective distributed algorithms that are guaranteed to improve the MH algorithm over time when running on a graph and eventually reach 'efficiency-optimality', while preserving the same desired stationary distribution throughout.
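To make the self-transition issue concrete, the following is a minimal sketch (not the paper's proposed algorithm) of a standard MH random walk on an undirected graph with a uniform-over-neighbors proposal. The inputs `graph`, `pi`, and the helper names are hypothetical; whenever a proposed move is rejected, the walk stays in place, which is exactly the self-transition behavior the abstract identifies as inefficient under the Peskun ordering.

```python
import random

def mh_step(graph, pi, current):
    """One MH step on a graph; returns the next node (possibly `current`).

    graph: dict mapping node -> list of neighbors (undirected, hypothetical input)
    pi:    dict mapping node -> target stationary probability (may be unnormalized)
    """
    neighbors = graph[current]
    proposal = random.choice(neighbors)            # q(i, j) = 1 / deg(i)
    # Acceptance probability for a uniform-over-neighbors proposal:
    #   a(i, j) = min(1, pi(j) * deg(i) / (pi(i) * deg(j)))
    accept = min(1.0, pi[proposal] * len(neighbors)
                 / (pi[current] * len(graph[proposal])))
    if random.random() < accept:
        return proposal
    return current  # rejection -> self-transition at the current node

def mh_walk(graph, pi, start, steps):
    """Simulate the walk and count how often it self-transitions."""
    node, self_transitions = start, 0
    for _ in range(steps):
        nxt = mh_step(graph, pi, node)
        if nxt == node:
            self_transitions += 1
        node = nxt
    return node, self_transitions

if __name__ == "__main__":
    # Toy 4-node graph with a non-uniform target distribution.
    graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
    pi = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
    print(mh_walk(graph, pi, start=0, steps=10_000))
```

Running the toy example reports a noticeable count of self-transitions; the distributed algorithms described in the abstract aim to reduce such wasted steps while leaving the stationary distribution unchanged.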
