Abstract

This chapter discusses structure learning, which consists of learning the directed acyclic graph (DAG) in a Bayesian network from data. The goal is to learn a DAG that satisfies the Markov condition with the probability distribution P that is generating the data. This process of learning such a DAG, called model selection, is detailed. The chapter covers both score-based and constraint-based structure learning. For score-based structure learning, two scores are discussed---namely, the Bayesian score and the Bayesian information criterion (BIC) score. Once the DAG is learned from data, one can learn the parameters; the result is a Bayesian network that can be used to do inference for the next case. The chapter presents an example to illustrate the technique. It then examines the constraint-based approach by showing how to learn a DAG faithful to a probability distribution, followed by a discussion of embedded faithfulness. Structure learning is applied to infer causal influences from data. When large amounts of data are not available, ordinarily a unique DAG cannot be learned, and when there are a large number of variables, it is necessary to do approximate structure learning. Finally, the chapter presents learning packages that implement the methods discussed.
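To make the score-based idea concrete, here is a minimal sketch of the BIC score for candidate DAGs over binary variables: the maximized log-likelihood of the data under the structure, minus (d/2) log N, where d is the number of free parameters. The function name `bic_score`, the `parents` dictionary encoding of a DAG, and the synthetic data are all illustrative assumptions, not the chapter's own code.

```python
import numpy as np
from itertools import product

def bic_score(data, parents):
    """BIC of a DAG over binary variables.

    data    : (N, k) array of 0/1 values.
    parents : dict mapping each variable index to a tuple of parent indices
              (a hypothetical encoding of the DAG, chosen for this sketch).
    Returns log-likelihood at the maximum-likelihood parameters
    minus (d / 2) * log N, where d counts free parameters.
    """
    n = len(data)
    log_lik = 0.0
    n_params = 0
    for x, pa in parents.items():
        # One free parameter per configuration of the parents (binary X).
        n_params += 2 ** len(pa)
        for config in product([0, 1], repeat=len(pa)):
            mask = np.ones(n, dtype=bool)
            for p, v in zip(pa, config):
                mask &= data[:, p] == v
            n_pa = int(mask.sum())
            if n_pa == 0:
                continue  # parent configuration never observed
            n_x1 = int((data[mask, x] == 1).sum())
            for count in (n_x1, n_pa - n_x1):
                if count > 0:
                    log_lik += count * np.log(count / n_pa)
    return log_lik - (n_params / 2) * np.log(n)

# Synthetic data in which X1 strongly depends on X0 (assumed for illustration).
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = np.where(rng.random(500) < 0.9, x0, 1 - x0)
data = np.column_stack([x0, x1])

chain = bic_score(data, {0: (), 1: (0,)})  # DAG with edge X0 -> X1
empty = bic_score(data, {0: (), 1: ()})    # DAG with no edges
```

On data like this, the DAG with the edge X0 -> X1 should receive the higher BIC score, since the gain in log-likelihood from modeling the dependence outweighs the penalty for its one extra parameter. In practice one searches over candidate DAGs and selects the highest-scoring structure.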
