Abstract

Uncertain reasoning over both continuous and discrete random variables is important for many applications in artificial intelligence. Unfortunately, dealing with continuous variables is not an easy task. In this tutorial, we will study some of the methods and models developed in the literature for this purpose. We will start with the discretization of continuous random variables, with a special focus on the numerous issues it raises, ranging from which discretization criterion to use to the appropriate way of exploiting the discretized variables during structure learning. These issues will justify the exploitation of hybrid models designed to encode mixed probability distributions. Several such models have been proposed in the literature. Among them, Conditional Linear Gaussian (CLG) models are very popular. They can be used very efficiently for inference, but they lack flexibility in the sense that they impose that every continuous random variable follows a conditional Normal distribution whose mean is a linear function of its continuous parents. Other popular models are mixtures of truncated exponentials, mixtures of polynomials and mixtures of truncated basis functions. Through a clever use of mixtures of distributions, these models can approximate arbitrary mixed probability distributions very well. However, exact inference in these models can be very time-consuming. Therefore, when choosing which model to exploit, one has to trade off the flexibility of the uncertainty model against the computational complexity of its learning and inference mechanisms.
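To make the CLG restriction mentioned above concrete, here is a minimal sketch of a Conditional Linear Gaussian node. It is illustrative only, not code from the tutorial: the class name, parameters and example values are assumptions. The key point it demonstrates is that, for each configuration of the discrete parents, the continuous child follows a Normal distribution whose mean is a linear function of its continuous parents.

```python
import numpy as np

class CLGNode:
    """Hypothetical CLG node: given discrete-parent configuration d and
    continuous-parent values y, the variable X is distributed as
        X | Y = y, D = d  ~  N(b0[d] + b[d] . y, sigma2[d]),
    i.e. a Normal whose mean depends linearly on the continuous parents."""

    def __init__(self, b0, b, sigma2):
        # One (intercept, coefficient vector, variance) triple per
        # discrete-parent configuration, keyed by that configuration.
        self.b0, self.b, self.sigma2 = b0, b, sigma2

    def density(self, x, y, d):
        """Evaluate the conditional density p(x | y, d)."""
        mean = self.b0[d] + np.dot(self.b[d], y)
        var = self.sigma2[d]
        return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Example: X has one continuous parent Y and one binary discrete parent D.
node = CLGNode(
    b0={0: 1.0, 1: -2.0},
    b={0: np.array([0.5]), 1: np.array([3.0])},
    sigma2={0: 1.0, 1: 0.25},
)
print(node.density(x=1.4, y=np.array([0.8]), d=0))  # peak of N(1.4, 1)
```

This parameterization is exactly what makes CLG inference efficient (everything stays within the Gaussian family) and also what limits its flexibility: multimodal or non-linear continuous relationships cannot be represented, which is the gap that mixtures of truncated exponentials, polynomials and truncated basis functions aim to fill.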
