Abstract

In the construction of a Bayesian network from observed data, the fundamental assumption that variables sharing the same parent are conditionally independent can be met by the introduction of a hidden node (C.K. Kwoh and D.F. Gillies, 1994). We show that the conditional probability matrices of a hidden node linking a triplet of observed nodes can be determined by the gradient descent method. As in all operational research problems, the quality of the result depends on the ability to locate a feasible solution for the conditional probabilities. C.K. Kwoh and D.F. Gillies (1995) detailed methodologies for estimating the initial values of unobservable variables in Bayesian networks. We formulate the determination of the best conditional probability matrices as an estimation problem: the discrepancies between the observed and predicted values are mapped into a monotonic cost function whose gradients are used to adjust the parameters being estimated. We investigated several popular error cost functions for training networks with hidden nodes and found that the cross-entropy and sum-of-squared-error cost functions work equally well for our implementation.
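The estimation scheme described in the abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' implementation: it uses a binary hidden node H with three observed binary children A, B, C, a sigmoid parameterisation of the conditional probabilities (so gradient steps cannot leave valid probability ranges), and finite-difference gradients in place of analytic ones. Both the sum-of-squared-error and cross-entropy costs from the abstract are included.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predicted_joint(logits):
    """Marginalise the hidden node out: p(a,b,c) = sum_h p(h) p(a|h) p(b|h) p(c|h)."""
    p_h1 = sigmoid(logits[0])                 # p(H = 1)
    cond = [sigmoid(t) for t in logits[1:]]   # p(X_i = 1 | H = h), indexed 2*i + h
    joint = {}
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                total = 0.0
                for h in (0, 1):
                    p = p_h1 if h == 1 else 1.0 - p_h1
                    for i, x in enumerate((a, b, c)):
                        q = cond[2 * i + h]
                        p *= q if x == 1 else 1.0 - q
                    total += p
                joint[(a, b, c)] = total
    return joint

def sse_cost(logits, observed):
    """Sum of squared errors between observed and predicted joint distributions."""
    pred = predicted_joint(logits)
    return sum((observed[k] - pred[k]) ** 2 for k in observed)

def cross_entropy_cost(logits, observed):
    """Cross entropy of the predicted joint under the observed one."""
    pred = predicted_joint(logits)
    return -sum(observed[k] * math.log(pred[k]) for k in observed if observed[k] > 0)

def gradient_descent(cost, observed, steps=500, lr=0.5, eps=1e-5):
    """Plain gradient descent on the chosen cost, with finite-difference gradients."""
    rng = random.Random(0)
    logits = [rng.uniform(-1.0, 1.0) for _ in range(7)]
    for _ in range(steps):
        grad = []
        for i in range(len(logits)):
            up = list(logits); up[i] += eps
            dn = list(logits); dn[i] -= eps
            grad.append((cost(up, observed) - cost(dn, observed)) / (2 * eps))
        logits = [p - lr * g for p, g in zip(logits, grad)]
    return logits

# Usage: fabricate an "observed" joint from known parameters, then fit from scratch.
true_logits = [0.8, -1.5, 1.5, -1.0, 1.0, -2.0, 2.0]
observed = predicted_joint(true_logits)
initial_cost = sse_cost([0.0] * 7, observed)     # cost of the uniform model
fitted = gradient_descent(sse_cost, observed)
final_cost = sse_cost(fitted, observed)
```

Either cost function can be passed to `gradient_descent` unchanged, which mirrors the abstract's finding that both perform comparably for this estimation task.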


