Abstract

Since the introduction of Dynamic Bayesian Networks (DBNs), their efficiency and effectiveness have increased through the development of three significant aspects: (i) modeling, (ii) learning and (iii) inference. However, no reviews of the literature have been found that chronicle their importance and development over time. The aim of this study is to provide a systematic review of the literature that details the evolution and advancement of DBNs, focusing on the period 1997–2019 and emphasizing the aspects of modeling, learning and inference. While the literature presents temporal event networks, knowledge encapsulation, relational, and time-varying representations as the four predominant DBN modeling approaches, this work groups them as essential techniques within DBNs and helps practitioners by associating each with the various challenges that arise in pattern discovery and prediction in dynamic processes. Regarding learning, the predominant methods mainly focus on scoring with greedy search. Finally, our study suggests that the main methods used in DBN inference extend or adapt those used in static BNs, and are oriented toward optimizing either processing time or error rate.
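The abstract identifies scoring with greedy search as the predominant DBN learning strategy. The following is a minimal sketch of that family of methods, not any specific algorithm from the surveyed papers: it hill-climbs over the inter-slice arcs of a two-slice DBN with a BIC-style score. The toy data, function names, and scoring details are all assumptions for illustration.

```python
import math
import random

random.seed(0)
N_VARS = 3
T = 300

# Toy binary time series: X1_t depends on X0_{t-1} so the search has
# something to find; the other variables are pure noise.
seq = [[random.randint(0, 1) for _ in range(N_VARS)]]
for _ in range(T - 1):
    prev = seq[-1]
    x0 = random.randint(0, 1)
    x1 = prev[0] if random.random() < 0.9 else 1 - prev[0]
    x2 = random.randint(0, 1)
    seq.append([x0, x1, x2])

# (previous slice, current slice) pairs, i.e. the unrolled transitions.
transitions = list(zip(seq[:-1], seq[1:]))

def bic_score(child, parents):
    """BIC-style local score for variable `child` at time t, with parents
    taken from slice t-1: max log-likelihood minus a parameter penalty."""
    counts = {}
    for prev, cur in transitions:
        key = tuple(prev[p] for p in sorted(parents))
        cell = counts.setdefault(key, [0, 0])
        cell[cur[child]] += 1
    loglik = 0.0
    for c0, c1 in counts.values():
        total = c0 + c1
        for c in (c0, c1):
            if c:
                loglik += c * math.log(c / total)
    return loglik - 0.5 * (2 ** len(parents)) * math.log(len(transitions))

def hill_climb():
    """Greedy search: toggle one inter-slice arc at a time and keep any
    change that raises the (decomposable) score, until no toggle helps."""
    parents = {v: frozenset() for v in range(N_VARS)}
    improved = True
    while improved:
        improved = False
        for child in range(N_VARS):
            for par in range(N_VARS):
                cand = parents[child] ^ {par}  # toggle the arc par -> child
                if bic_score(child, cand) > bic_score(child, parents[child]):
                    parents[child] = frozenset(cand)
                    improved = True
    return parents

print(hill_climb())  # expect X0 among the parents of X1, given the toy data
```

Because every candidate arc runs from slice t-1 to slice t, acyclicity holds by construction, which is one reason greedy scoring search is convenient for DBN transition models: no cycle check is needed.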

Highlights

  • Probabilistic Graphical Models (PGMs) use a graphical representation to compactly express probability distributions while explicitly representing large joint distributions for transparent evaluation by specialists [1]

  • Dynamic Bayesian Networks (DBNs) are extensions of Bayesian networks for modeling dynamic processes; they consist of a series of time slices, each presenting the state of all variables at a given time, and thus represent the evolution of a process over time [1]

  • DBNs can be seen as a generalization of Markov Chains and Hidden Markov Models because they represent the state space in a factorized way rather than as a single discrete random variable [5] (see the factorization sketch after this list), and can be classified as (i) directed and (ii) probabilistic
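To make the factorization claim in the last highlight concrete: in a two-slice DBN the transition model decomposes over the individual state variables, rather than over one monolithic state variable as in a Markov chain or HMM. The notation below is assumed for illustration, not taken from the paper.

```latex
% Factorized DBN transition model: each state variable X_t^i has its own
% (small) parent set, instead of one conditional table over a single
% monolithic state variable.
P(\mathbf{X}_t \mid \mathbf{X}_{t-1})
  = \prod_{i=1}^{n} P\big(X_t^{i} \mid \mathrm{Pa}(X_t^{i})\big),
\qquad \mathrm{Pa}(X_t^{i}) \subseteq \mathbf{X}_{t-1} \cup \mathbf{X}_t .
```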


Introduction

Probabilistic Graphical Models (PGMs) use a graphical representation to compactly express probability distributions while explicitly representing large joint distributions for transparent evaluation by specialists [1]. Among the representative dynamic PGMs are (1) Markov Chains, (2) Hidden Markov Models, (3) Markov Decision Processes (MDPs), (4) Partially Observable MDPs and (5) Dynamic Bayesian Networks (DBNs). DBNs can be seen as a generalization of Markov Chains and Hidden Markov Models because they represent the state space in a factorized way rather than as a single discrete random variable [5], and can be classified as (i) directed and (ii) probabilistic. All probabilistic network models are represented as graphs that define their structure, with local functions that describe their parameters. A DBN is assumed to present the same model at every time t. In that sense, the model is dynamic: the distribution over its variables evolves over time and is re-estimated each time a new observation occurs.
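A minimal sketch of the last point, under assumed toy parameters (the two-variable model, persistence and sensor probabilities, and function names below are illustrative, not from the paper): the same two-slice model is repeated at every time t, and exact filtering by enumeration re-estimates the belief over the factorized hidden state whenever a new observation arrives.

```python
import itertools

# Factorized hidden state: two binary variables (X1, X2) per time slice.
STATES = list(itertools.product([0, 1], repeat=2))

def trans(prev, cur):
    """P(X_t | X_{t-1}), factorized per variable: each X_t^i depends only
    on its own previous value, persisting with probability 0.8."""
    p = 1.0
    for a, b in zip(prev, cur):
        p *= 0.8 if a == b else 0.2
    return p

def obs_lik(state, y):
    """P(Y_t | X_t): a single noisy sensor reading X1 correctly 90% of the time."""
    return 0.9 if y == state[0] else 0.1

def filter_step(belief, y):
    """One slice of the unrolled DBN: predict with the (time-invariant)
    transition model, then condition on the new observation and renormalize."""
    predicted = {c: sum(belief[p] * trans(p, c) for p in STATES) for c in STATES}
    updated = {c: predicted[c] * obs_lik(c, y) for c in STATES}
    z = sum(updated.values())
    return {c: v / z for c, v in updated.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior at t = 0
for y in [1, 1, 0, 1]:                           # toy observation sequence
    belief = filter_step(belief, y)
print(max(belief, key=belief.get), belief)
```

The same enumeration scheme is exactly what becomes intractable as the number of state variables grows, which is why DBN inference methods adapt static-BN algorithms to trade processing time against error rate.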

