Abstract

Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm, named bcDBN, for learning optimal DBNs consistent with a breadth-first search (BFS) order. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach enlarges the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm against tDBN on simulated data, revealing that it performs well across different experiments.
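
To make the structure of this search space concrete, below is a minimal Python sketch, not the authors' implementation: assuming that BFS-consistency means each node's intra-slice parents must precede it in the BFS order of the optimal branching, it enumerates the candidate parent sets a score-based search would evaluate. The function names and the toy branching are hypothetical.

```python
from collections import deque
from itertools import combinations

def bfs_order(tree_adj, root):
    """Breadth-first order of the optimal branching found by tDBN."""
    order, seen, queue = [root], {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in tree_adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                order.append(v)
                queue.append(v)
    return order

def candidate_parent_sets(order, node, past_nodes, k, p):
    """Yield all (intra, inter) parent sets for `node`: at most k
    intra-slice parents preceding `node` in the BFS order, and at
    most p inter-slice parents from the previous slices."""
    preceding = order[:order.index(node)]
    for k_ in range(k + 1):
        for intra in combinations(preceding, k_):
            for p_ in range(p + 1):
                for inter in combinations(past_nodes, p_):
                    yield intra, inter

# Example: a branching over 4 variables rooted at 0, Markov lag 1.
branching = {0: [1, 2], 2: [3]}
order = bfs_order(branching, 0)  # [0, 1, 2, 3]
sets = list(candidate_parent_sets(order, 3, ["0-", "1-", "2-", "3-"], k=1, p=1))
```

Each node has O(n^k (nm)^p) such candidate sets, consistent with the stated complexity: exponential in k and p, but polynomial in n.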

Highlights

  • Bayesian networks (BN) represent, in an efficient and accurate way, the joint probability of a set of random variables [1]

  • We propose a generalization of the tDBN algorithm, considering dynamic Bayesian networks (DBN) in which each transition network is consistent with the breadth-first search (BFS) order induced by the optimal branching of the tDBN network; we call such a network a BFS-consistent k-graph DBN (bcDBN)

  • We assess the merits of the proposed algorithm by comparing it with the state-of-the-art tDBN learning algorithm [13]

Summary

Introduction

Bayesian networks (BN) represent, in an efficient and accurate way, the joint probability of a set of random variables [1]. DBNs consist of a prior network, representing the distribution over the initial attributes, and a set of transition networks, representing the transition distribution between attributes over time. They are used in a large variety of applications such as protein sequencing [3], speech recognition [4] and clinical forecasting [5]. In a score-based approach, a scoring criterion is considered, which measures how well the network fits the data [6,7,8,9,10]. In this case, learning a BN reduces to the problem of finding the network that maximizes the score, given the data. Not taking into account the acyclicity constraints, it was proved that learning BNs is NP-hard; learning complex transition networks, with both intra- and inter-slice connections, is likewise NP-hard.
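
As an illustration of such a scoring criterion, here is a minimal sketch of the conventional log-likelihood term for a single variable and candidate parent set; decomposable scores of this kind are what make per-variable optimization possible, and penalized criteria such as MDL/BIC add a complexity term on top. The function name and data layout are illustrative, not from the paper.

```python
import math
from collections import Counter

def log_likelihood_term(data, child, parents):
    """LL contribution of one (child, parent set) pair: the sum over
    parent configurations j and child values v of N_jv * log(N_jv / N_j)."""
    joint, marg = Counter(), Counter()
    for row in data:  # row: dict mapping variable name -> observed value
        cfg = tuple(row[x] for x in parents)
        joint[(cfg, row[child])] += 1
        marg[cfg] += 1
    return sum(n * math.log(n / marg[cfg]) for (cfg, _), n in joint.items())

# Toy example: B is fully determined by A, so the LL term is maximal (0.0).
data = [{"A": 0, "B": 1}, {"A": 0, "B": 1}, {"A": 1, "B": 0}]
print(log_likelihood_term(data, child="B", parents=["A"]))  # 0.0
```

Scoring a whole network sums these terms over all (child, parent set) pairs, so maximizing the score decomposes into choosing the best parent set for each variable.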
