Abstract

A dynamic Bayesian network (DBN) is a statistical tool for representing stochastic causal systems using probability and graph theories. The most important step in constructing a DBN is selecting the best parameter vector, given as a conditional probability distribution, through a proper learning algorithm. This study presents a novel parameter learning methodology for Markov chain (MC) and hidden Markov model (HMM) DBNs using the constrained least squares method and gradient descent optimisation, respectively. The former is employed to satisfy the probability axiom in an MC model, and the latter is applied to derive adjustment rules for the HMM parameters. The authors initially assume that an observation probability vector must be predefined before the proposed learning algorithm is applied to either model. A simulation experiment is conducted to test the learning algorithm for modelling non-stationary stochastic systems. The authors additionally provide a qualitative comparative study with recently proposed learning methodologies for DBN models.
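The constrained least squares idea for the MC case can be illustrated as follows. This is a minimal sketch, not the authors' algorithm: it assumes a hypothetical row-stochastic transition matrix `P_true`, simulates state-probability trajectories via p_{t+1} = Pᵀp_t, and fits a candidate matrix by least squares subject to the probability-axiom constraints (rows sum to one, entries in [0, 1]) using SciPy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

n = 3
rng = np.random.default_rng(0)

# Hypothetical "true" row-stochastic transition matrix (illustration only).
P_true = rng.random((n, n))
P_true /= P_true.sum(axis=1, keepdims=True)

# Simulate state-probability trajectories p_{t+1} = P^T p_t
# from each basis initial distribution.
pairs = []
for i in range(n):
    p = np.eye(n)[i]
    for _ in range(10):
        p_next = P_true.T @ p
        pairs.append((p, p_next))
        p = p_next
X = np.array([a for a, _ in pairs])  # p_t vectors,     shape (T, n)
Y = np.array([b for _, b in pairs])  # p_{t+1} vectors, shape (T, n)

def loss(x):
    """Sum of squared one-step prediction errors; (P^T p_t)^T = p_t^T P."""
    P = x.reshape(n, n)
    return np.sum((X @ P - Y) ** 2)

# Probability-axiom constraints: each row of P sums to 1, entries in [0, 1].
cons = [{"type": "eq", "fun": lambda x, i=i: x.reshape(n, n)[i].sum() - 1.0}
        for i in range(n)]
res = minimize(loss, np.full(n * n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * (n * n), constraints=cons)
P_hat = res.x.reshape(n, n)  # estimated transition matrix
```

Because each basis vector e_i directly probes row i of P, the fit is fully identified here; with real data one would instead stack observed distribution estimates into `X` and `Y`.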
