Abstract

Graphical models use graphs to compactly capture stochastic dependencies amongst a collection of random variables. Inference over graphical models corresponds to finding marginal probability distributions given a joint probability distribution. In general, this is computationally intractable, which has led to a quest for efficient approximate inference algorithms. We propose a framework for generalized inference over graphical models that can be used as a wrapper for improving the estimates of approximate inference algorithms. Instead of applying an inference algorithm to the original graph, we apply the inference algorithm to a block-graph, defined as a graph in which the nodes are non-overlapping clusters of nodes from the original graph. This results in marginal estimates for each cluster of nodes, which we further marginalize to obtain the marginal estimates of each node. Our proposed block-graph construction algorithm is simple, efficient, and motivated by the observation that approximate inference is more accurate on graphs with longer cycles. We present extensive numerical simulations that illustrate our block-graph framework with a variety of inference algorithms (e.g., those in the libDAI software package). These simulations show the improvements provided by our framework.
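The final step of the wrapper described above, recovering per-node marginals from cluster marginals, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the partition into clusters and the inference step on the block-graph are assumed to come from elsewhere, and a cluster marginal is represented here as a dictionary mapping joint binary configurations of the cluster's variables to probabilities.

```python
# Hypothetical sketch of the wrapper's last step: marginalize an
# (approximate) cluster marginal down to each member node's marginal.
# The cluster marginal q maps tuples of 0/1 values (one entry per
# variable in the cluster) to probabilities.

def node_marginals_from_cluster(cluster_nodes, cluster_marginal):
    """Marginalize a cluster marginal down to per-node marginals."""
    marginals = {v: {0: 0.0, 1: 0.0} for v in cluster_nodes}
    for config, prob in cluster_marginal.items():
        for v, x in zip(cluster_nodes, config):
            marginals[v][x] += prob
    return marginals

# Usage: a two-node cluster {a, b} whose estimated marginal came from
# running inference on the block-graph.
cluster = ["a", "b"]
q = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
m = node_marginals_from_cluster(cluster, q)
assert abs(m["a"][1] - 0.5) < 1e-12  # 0.2 + 0.3
assert abs(m["b"][1] - 0.4) < 1e-12  # 0.1 + 0.3
```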

Highlights

  • A graphical model is a probability distribution defined on a graph such that each node represents a random variable, and edges in the graph represent conditional independencies

  • We show how our block-graph framework improves the marginal distribution estimates computed by current inference algorithms in the literature: BP [7], conditioned belief propagation (CBP) [8], loop corrected belief propagation (LC) [9], [10], tree-structured expectation propagation (TreeEP), iterative join-graph propagation (IJGP) [11], and generalized belief propagation (GBP) [12]–[15]

  • Our block-graph framework is not limited to generalizing BP, and we show this in our numerical simulations, where we generalize conditioned belief propagation (CBP) and loop corrected belief propagation (LC)


INTRODUCTION

A graphical model is a probability distribution defined on a graph such that each node represents a random variable (or multiple random variables), and edges in the graph represent conditional independencies. The underlying graph structure in a graphical model leads to a factorization of the joint probability distribution. Graphical models are used in many applications, such as sensor networks, image processing, computer vision, bioinformatics, speech processing, social network analysis, and ecology [1]–[3], to name a few. Inference over graphical models corresponds to finding the marginal distribution p_s(x_s) of each random variable given the joint probability distribution p(x). It is well known that inference over graphical models is computationally tractable for only a small class of graphical models (graphs with low treewidth [4]), which has motivated much work on efficient approximate inference algorithms.
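To see why exact inference is intractable in general, note that computing a single marginal p_s(x_s) by brute force requires summing the joint over all configurations of the remaining variables, which for n binary variables costs O(2^n). A minimal sketch of this brute-force computation, on a toy joint distribution invented here for illustration:

```python
import itertools
import random

# Toy joint distribution p(x) over n binary variables, stored as a
# dict from configurations (tuples of 0/1) to probabilities. Exact
# marginalization of one variable sums over all 2^(n-1) configurations
# of the others -- the cost that makes exact inference intractable for
# large, high-treewidth graphs.

def exact_marginal(p, n, s):
    """Brute-force marginal p_s(x_s) of variable s from the joint table p."""
    marg = {0: 0.0, 1: 0.0}
    for x in itertools.product([0, 1], repeat=n):
        marg[x[s]] += p[x]
    return marg

# Example: a random (normalized) joint over 3 binary variables.
random.seed(0)
weights = {x: random.random() for x in itertools.product([0, 1], repeat=3)}
Z = sum(weights.values())
p = {x: w / Z for x, w in weights.items()}

m0 = exact_marginal(p, 3, 0)
assert abs(m0[0] + m0[1] - 1.0) < 1e-12  # a marginal sums to one
```

Approximate inference algorithms such as BP sidestep this exponential sum by exploiting the factorization of p(x) over the graph.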

Summary of Contributions
Paper Organization
BACKGROUND
BLOCK-TREES
Main Algorithm
Optimal Block-Trees
Greedy Algorithms for Finding Optimal Block-Trees
BLOCK-GRAPH
INFERENCE USING BLOCK-GRAPHS
NUMERICAL SIMULATIONS
Evaluating the Block-Graph Construction Algorithm
