Abstract
The ability to perform probabilistic inference tasks efficiently is critical to large-scale applications in statistics and artificial intelligence. Dramatic speedups can be achieved by appropriately mapping current inference algorithms to a parallel framework. Parallel exact inference methods still suffer from exponential complexity in the worst case. Approximate inference methods have been parallelized with good speedup. In this paper, we focus on a variant of the Belief Propagation algorithm that has better convergence properties and is provably convergent under certain conditions. We show that this variant is amenable to coarse-grained parallelization, and we propose techniques to parallelize it optimally without sacrificing convergence. Experiments on a shared-memory system demonstrate near-ideal speedup with reasonable scalability.
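To make the notion of coarse-grained parallel message passing concrete, here is a minimal, hypothetical sketch (not the paper's actual method): synchronous loopy belief propagation on a small binary pairwise model, with the message updates of one iteration distributed across a worker pool. All names (`psi`, `phi`, `update`) are illustrative assumptions; the graph, potentials, and schedule are chosen only for brevity.

```python
# Sketch: synchronous (Jacobi-schedule) loopy BP on a 4-node cycle of
# binary variables, with per-edge message updates partitioned across
# worker threads. Illustrative only; not the paper's algorithm.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
directed = edges + [(j, i) for i, j in edges]    # messages in both directions
psi = np.array([[2.0, 1.0], [1.0, 2.0]])         # shared pairwise potential
phi = np.ones((4, 2))                            # uniform unary potentials
msgs = {e: np.ones(2) / 2 for e in directed}

def update(e, old):
    """Recompute message i->j from the previous iteration's snapshot."""
    i, j = e
    prod = phi[i].copy()
    for (k, l) in directed:
        if l == i and k != j:                    # incoming to i, except from j
            prod *= old[(k, l)]
    m = psi.T @ prod                             # marginalize over x_i
    return e, m / m.sum()

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(50):
        old = dict(msgs)                         # all workers read one snapshot
        for e, m in pool.map(lambda e: update(e, old), directed):
            msgs[e] = m

# Belief at node 0: unary potential times all incoming messages.
b0 = phi[0] * np.prod([msgs[e] for e in directed if e[1] == 0], axis=0)
b0 /= b0.sum()
print(b0)  # symmetric model, so the belief stays uniform: [0.5, 0.5]
```

The coarse granularity comes from the snapshot `old`: within one iteration every update reads the same frozen messages, so edges can be assigned to workers in large blocks with no locking, at the cost of using the slower synchronous schedule rather than an asynchronous one.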