Abstract

This paper deals with prior uncertainty in the parameter learning procedure for Bayesian networks. In most studies in the literature, parameter learning is based on two well-known criteria, namely maximum likelihood and maximum a posteriori. In the presence of prior information, the literature abounds with situations in which a maximum a posteriori estimate is computed as the desired estimate, but those studies rarely motivate its use from a loss-function-based viewpoint. In this paper, we recall that the maximum a posteriori estimator is the Bayes estimator under the zero-one loss function and, criticizing the zero-one loss, we suggest the general entropy loss function as a useful loss when overlearning and underlearning need serious attention. We take prior uncertainty into account and extend parameter learning to the case in which the prior information is contaminated. Addressing a real-world problem, we conduct a simulation study of the behavior of the proposed estimators. Finally, to assess the effect of changing the hyperparameters of a chosen prior on the learning procedure, we carry out a sensitivity analysis with respect to selected hyperparameters.
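For orientation, a commonly used form of the general entropy loss and the resulting Bayes estimator are sketched below; the shape parameter q and this exact parameterization are assumptions on our part and need not match the paper's notation. For an estimate \hat{\theta} of a parameter \theta,

L(\hat{\theta}, \theta) \propto \left(\frac{\hat{\theta}}{\theta}\right)^{q} - q \ln\!\left(\frac{\hat{\theta}}{\theta}\right) - 1, \qquad \hat{\theta}_{GE} = \left[\mathbb{E}\!\left(\theta^{-q} \mid \text{data}\right)\right]^{-1/q}.

Under this form, q > 0 penalizes overestimation (overlearning) more heavily than underestimation, q < 0 does the reverse, and the special case q = -1 recovers the posterior mean.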
