Abstract

This chapter discusses structure learning, which consists of learning the directed acyclic graph (DAG) in a Bayesian network from data. The goal is to find a DAG that satisfies the Markov condition with the probability distribution P that is generating the data. This process of learning such a DAG, called model selection, is detailed for both score-based and constraint-based structure learning. Score-based structure learning is illustrated with two scores, namely the Bayesian score and the Bayesian information criterion (BIC) score. Once the DAG is learned from data, one can learn the parameters; the result is a Bayesian network that can be used to do inference for the next case. The chapter presents an example to illustrate the technique. It further examines the constraint-based approach by showing how to learn a DAG faithful to a probability distribution, followed by a discussion of embedded faithfulness. Structure learning is then applied to inferring causal influences from data. When large amounts of data are not available, a unique DAG ordinarily cannot be learned, and when there are a large number of variables, approximate structure learning is necessary. Finally, the chapter presents learning packages that implement the methods discussed.
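As a minimal sketch of the score-based idea the abstract mentions, the following computes the BIC score of a candidate DAG from complete discrete data: the maximized log-likelihood of each node given its parents, minus a penalty of (d/2) log N for the d free parameters. The function name, data format, and DAG representation here are illustrative assumptions, not the chapter's own notation.

```python
import math
from collections import Counter

def bic_score(data, dag, cardinality):
    """BIC score of a DAG given complete discrete data.

    data: list of dicts mapping node -> observed value.
    dag: dict mapping node -> list of its parent nodes.
    cardinality: dict mapping node -> number of states.
    (Illustrative interface; not from the chapter.)
    """
    n = len(data)
    score = 0.0
    for node, parents in dag.items():
        # Counts N_ijk of (parent configuration, node value),
        # and N_ij of each parent configuration alone.
        joint = Counter((tuple(row[p] for p in parents), row[node])
                        for row in data)
        marg = Counter(tuple(row[p] for p in parents) for row in data)
        # Maximized log-likelihood term: sum_k N_ijk * log(N_ijk / N_ij).
        for (pa_cfg, _), n_ijk in joint.items():
            score += n_ijk * math.log(n_ijk / marg[pa_cfg])
        # Penalty (d/2) log n, where d is the number of free
        # parameters in this node's conditional distribution.
        q = 1
        for p in parents:
            q *= cardinality[p]
        d = (cardinality[node] - 1) * q
        score -= 0.5 * d * math.log(n)
    return score
```

In model selection, this score would be computed for each candidate DAG and the highest-scoring structure retained; for example, on data where Y closely tracks X, the DAG X → Y scores higher than the edgeless DAG, because the likelihood gain outweighs the extra-parameter penalty.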
