Abstract
We focus on the problem of embedding deterministic equations in discrete Bayesian networks. This is typically achieved by discretizing both the input and the output variables and assigning a degenerate quantification to the corresponding conditional probability table. We note that, generally speaking, this approach, based on the classical Bayesian theory of probability, cannot properly account for the information loss induced by the discretization. Consequently, we propose a conservative modeling of this epistemic uncertainty by means of credal sets, i.e., sets of probability mass functions. This approach transforms the original Bayesian network into a credal network, returning interval-valued inferences that are robust with respect to the discretization. Procedures for optimal discretization in this setup are discussed. The proposed method can be used both for developing knowledge-based expert systems and for machine learning with probabilistic graphical models.
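To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how discretizing a deterministic relation y = f(x) creates the epistemic uncertainty the abstract refers to: an input bin may be compatible with several output bins, so a single degenerate CPT column is unjustified, whereas a credal column can keep the full set of compatible degenerate assignments (here represented by lower/upper probabilities). The toy function f(x) = x^2, the bin edges, and names such as `compatible_output_bins` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def compatible_output_bins(f, x_edges, y_edges, samples_per_bin=1000):
    """For each input bin [x_edges[i], x_edges[i+1]], collect the output
    bins that y = f(x) can fall into. More than one compatible bin means
    the discretization loses information about the deterministic relation."""
    n_in = len(x_edges) - 1
    result = []
    for i in range(n_in):
        xs = np.linspace(x_edges[i], x_edges[i + 1], samples_per_bin)
        ys = f(xs)
        # np.digitize maps each value to its output bin index (clipped to valid range)
        bins = np.clip(np.digitize(ys, y_edges) - 1, 0, len(y_edges) - 2)
        result.append(sorted(set(bins.tolist())))
    return result

def credal_cpt_columns(compatible, n_out):
    """Build an interval-valued (credal) column per input bin.
    One compatible output bin -> degenerate (Bayesian) column.
    Several compatible bins -> vacuous credal set over them,
    i.e. probability in [0, 1] for each compatible bin, 0 elsewhere."""
    columns = []
    for comp in compatible:
        lower = np.zeros(n_out)
        upper = np.zeros(n_out)
        if len(comp) == 1:
            lower[comp[0]] = upper[comp[0]] = 1.0
        else:
            for j in comp:
                upper[j] = 1.0  # lower bound stays 0: any mass split over comp is admitted
        columns.append((lower, upper))
    return columns

if __name__ == "__main__":
    # Hypothetical, coarse discretization of y = x**2 on [0, 3].
    f = lambda x: x ** 2
    x_edges = np.array([0.0, 1.0, 2.0, 3.0])        # 3 input bins
    y_edges = np.array([0.0, 1.0, 4.0, 6.0, 9.0])   # 4 output bins
    comp = compatible_output_bins(f, x_edges, y_edges)
    for i, (lo, up) in enumerate(credal_cpt_columns(comp, len(y_edges) - 1)):
        print(f"input bin {i}: compatible output bins {comp[i]}, "
              f"lower={lo}, upper={up}")
```

Under these assumed bin edges, some input bins are compatible with two output bins, so the credal column is vacuous over those bins; propagating such columns through the network is what turns the Bayesian network into a credal network with interval-valued inferences.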