Abstract

This talk will provide an overview of a personal perspective on inference and learning for graphical models, one that began with work on multi-resolution models for signals and images but has evolved into a more general look at inference and learning, especially for graphical models for which these tasks are tractable and scalable to large problems.

The talk will begin with a brief introduction to Markov models on undirected graphs and to message-passing algorithms, often known as Belief Propagation, that exactly solve inference problems for these models on a very special class of graphs, namely those without loops or cycles, i.e., trees. We'll then turn to building or learning models on such graphs, including ones with explicitly hierarchical structure, and will comment on some of the differences between the questions that have typically been addressed in very different communities (namely machine learning and system theory). We'll then present a new method for learning models on trees with hidden nodes.

The rest of the talk looks at what happens if one considers graphs with loops. We first examine what is known as Loopy Belief Propagation and provide, for the Gaussian case, an explicit picture of what it does, when and why it works, and when it does not, based on what we call walk-sum analysis. We then use these ideas to describe another new set of algorithms based on the graph-theoretic concept of a feedback vertex set (i.e., a set of nodes that, if removed, leaves a cycle-free graph). As time allows, we'll discuss the learning of several other classes of graphical models, where in each case the objective is to learn models for which both learning and exact or nearly exact inference are computationally feasible.
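As a concrete companion to the Gaussian discussion above, the following is a minimal sketch, not taken from the talk itself, of Gaussian Loopy Belief Propagation in information form together with the walk-summability check ρ(|R|) < 1 used in walk-sum analysis to characterize when the iterations are well behaved. The function names and the small single-cycle example model are illustrative assumptions.

```python
# Minimal sketch: walk-summability check and Gaussian loopy BP (information form).
import numpy as np

def walk_summable(J):
    """Check rho(|R|) < 1 for R = I - D^{-1/2} J D^{-1/2} (unit-diagonal normalization)."""
    d = np.sqrt(np.diag(J))
    Jn = J / np.outer(d, d)                      # normalize so diag(Jn) == 1
    R = np.eye(len(J)) - Jn
    return np.max(np.abs(np.linalg.eigvals(np.abs(R)))) < 1.0

def gabp_means(J, h, iters=200):
    """Gaussian BP on p(x) ∝ exp(-x'Jx/2 + h'x); returns the approximate means."""
    n = len(h)
    dJ = np.zeros((n, n))                        # dJ[i, j]: precision part of message i -> j
    dh = np.zeros((n, n))                        # dh[i, j]: potential part of message i -> j
    nbrs = [[j for j in range(n) if j != i and J[i, j] != 0] for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in nbrs[i]:
                # "Cavity" quantities at node i, excluding the message from j
                J_cav = J[i, i] + sum(dJ[k, i] for k in nbrs[i] if k != j)
                h_cav = h[i] + sum(dh[k, i] for k in nbrs[i] if k != j)
                dJ[i, j] = -J[i, j] ** 2 / J_cav
                dh[i, j] = -J[i, j] * h_cav / J_cav
    J_hat = np.array([J[i, i] + sum(dJ[k, i] for k in nbrs[i]) for i in range(n)])
    h_hat = np.array([h[i] + sum(dh[k, i] for k in nbrs[i]) for i in range(n)])
    return h_hat / J_hat

# Single-cycle example: for a walk-summable model the BP means match the exact solve.
J = np.array([[1.0, 0.3, 0.0, 0.2],
              [0.3, 1.0, 0.3, 0.0],
              [0.0, 0.3, 1.0, 0.3],
              [0.2, 0.0, 0.3, 1.0]])
h = np.array([1.0, 0.0, -1.0, 0.5])
print("walk-summable:", walk_summable(J))
print("GaBP means  :", gabp_means(J, h))
print("exact means :", np.linalg.solve(J, h))
```

On a tree the same message updates terminate after one forward-backward sweep and give the exact marginals; on the loopy example above, walk-summability guarantees convergence, with exact means (though the variances BP reports are only approximate).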
