Abstract

Learning the structure of Bayesian networks from data is known to be a computationally challenging, NP-hard problem. The literature has long investigated how to perform structure learning from data containing large numbers of variables, following a general interest in high-dimensional applications (“small n, large p”) in systems biology and genetics. More recently, data sets with large numbers of observations (the so-called “big data”) have become increasingly common; and these data sets are not necessarily high-dimensional, sometimes having only a few tens of variables depending on the application. We revisit the computational complexity of Bayesian network structure learning in this setting, showing that the common choice of measuring it with the number of estimated local distributions leads to unrealistic time complexity estimates for the most common class of score-based algorithms, greedy search. We then derive more accurate expressions under common distributional assumptions. These expressions suggest that the speed of Bayesian network learning can be improved by taking advantage of the availability of closed-form estimators for local distributions with few parents. Furthermore, we find that using predictive instead of in-sample goodness-of-fit scores improves speed; and we confirm that it improves the accuracy of network reconstruction as well, as previously observed by Chickering and Heckerman (Stat Comput 10: 55–62, 2000). We demonstrate these results on large real-world environmental and epidemiological data; and on reference data sets available from public repositories.

Highlights

  • We provide general expressions for the (time) computational complexity of the most common class of score-based structure learning algorithms, greedy search, as a function of the number of variables N, of the sample size n, and of the number of parameters |Θ|; and we use these expressions to identify two simple yet effective optimisations to speed up structure learning in “big data” settings in which n ≫ N.

  • Bayesian networks (BNs; Pearl 1988) are a class of graphical models defined over a set of random variables X = {X1, . . . , XN}, each describing some quantity of interest, that are associated with the nodes of a directed acyclic graph (DAG) G. (Variables and the corresponding nodes are often referred to interchangeably.) Arcs in G express direct dependence relationships between the variables in X, with graphical separation in G implying conditional independence in probability (see the factorisation sketched after this list).

  • We demonstrate the improvements in the speed of structure learning discussed in Sects. 4.1 and 4.2 using the MEHRA data set from Vitolo et al. (2018), which used about 50 million observations to explore the interplay between environmental factors, exposure levels to outdoor air pollutants, and health outcomes in the English regions of the UK between 1981 and 2014.
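
As a side note to the second highlight, the conditional independencies encoded by G correspond to the usual factorisation of the joint distribution into local distributions, one for each node:

    P(\mathbf{X}) = \prod_{i=1}^{N} P(X_i \mid \Pi_{X_i}),

where \Pi_{X_i} denotes the set of parents of X_i in G.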

Summary

Introduction

The two goals of this paper are: 1. to provide general expressions for the (time) computational complexity of the most common class of score-based structure learning algorithms, greedy search, as a function of the number of variables N, of the sample size n, and of the number of parameters |Θ|; and 2. to use these expressions to identify two simple yet effective optimisations to speed up structure learning in “big data” settings in which n ≫ N. Our contributions complement related work on advanced data structures for machine learning applications, which include ADtrees (Moore and Lee 1998), frequent sets (Goldenberg and Moore 2004) and, more recently, bitmap representations combined with radix sort (Karan et al. 2018). That literature develops a framework for caching sufficient statistics, but concentrates on discrete variables, whereas we work in a more general setting in which data can include both discrete and continuous variables. In Sect. 4, we will use these expressions to identify two optimisations that can markedly improve the overall speed of learning GBNs and CLGBNs by leveraging the availability of closed-form estimates for the parameters of the local distributions and out-of-sample goodness-of-fit scores.
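
As an illustration of these two optimisations, the Python sketch below (not the paper's implementation; the function names fit_gaussian_node and predictive_loglik, and the data layout, are illustrative assumptions) fits the local distribution of a Gaussian node in closed form when it has zero or one parents, falls back to a general least-squares solve otherwise, and scores the fit with an out-of-sample log-likelihood instead of an in-sample score.

import numpy as np

def fit_gaussian_node(x, parents=None):
    """Fit the local distribution of a Gaussian node.

    x is a 1-D array of observations; parents is an (n, k) array or None.
    With zero or one parents the maximum-likelihood estimates have closed
    forms (sample mean/variance, simple linear regression), avoiding a
    general least-squares solve.
    """
    if parents is None or parents.shape[1] == 0:
        return {"coef": np.array([x.mean()]), "sigma2": x.var()}
    if parents.shape[1] == 1:
        p = parents[:, 0]
        beta = np.cov(x, p, bias=True)[0, 1] / p.var()   # closed-form slope
        alpha = x.mean() - beta * p.mean()                # closed-form intercept
        resid = x - (alpha + beta * p)
        return {"coef": np.array([alpha, beta]), "sigma2": resid.var()}
    # two or more parents: general least-squares fit with an intercept
    Z = np.column_stack([np.ones(len(x)), parents])
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ coef
    return {"coef": coef, "sigma2": resid.var()}

def predictive_loglik(fit, x_test, parents_test=None):
    """Out-of-sample Gaussian log-likelihood of a fitted local distribution."""
    if parents_test is None or parents_test.shape[1] == 0:
        mean = fit["coef"][0]
    else:
        Z = np.column_stack([np.ones(len(x_test)), parents_test])
        mean = Z @ fit["coef"]
    s2 = fit["sigma2"]
    return float(np.sum(-0.5 * (np.log(2 * np.pi * s2)
                                + (x_test - mean) ** 2 / s2)))

Comparing predictive_loglik for alternative parent sets on held-out data is one way to realise the out-of-sample scoring discussed above, while the closed-form branches avoid the cost of a full regression for the low-order parent sets that dominate the candidate models evaluated by greedy search.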

Computational complexity of greedy search
Revisiting computational complexity
Computational complexity for local distributions
Nodes in discrete BNs
Nodes in GBNs
Nodes in CLGBNs
Computational complexity for the whole BN
CLGBNs
Greedy search and big data
Speeding up low-order regressions in GBNs and CLGBNs
Predicting is faster than learning
Benchmarking and simulations
Findings
Conclusions