Abstract
This paper presents a general framework for learning with imprecise probabilities, based on a hierarchical approach with two sets of parameters. Imprecise information is available about the top set, and, conditional on the top set, precise Bayesian information is available about the other set of parameters. Given a set of observations, the information about both sets of parameters is updated by conditioning, and a model selection method is applied to compute a reduced top set. This model selection method is based on decision making with imprecise probabilities. It will be shown that many existing approaches fit into this general procedure, and a theoretical justification will be provided. Finally, the method will be applied to the problem of learning credal networks.
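To make the two-level structure concrete, the following sketch illustrates one way the updating and reduction steps can be written; the symbols $\Lambda$, $\theta$, $x$ and the generic "undominated" criterion are illustrative assumptions, not the paper's own notation.

\[
\mathcal{M} \;=\; \{\, p(\theta \mid \lambda) : \lambda \in \Lambda \,\}
\qquad \text{(imprecise top level: a set of precise priors indexed by $\lambda \in \Lambda$)}
\]
\[
p(\theta \mid \lambda, x) \;\propto\; p(x \mid \theta)\, p(\theta \mid \lambda)
\qquad \text{(precise Bayesian updating, carried out for each $\lambda \in \Lambda$)}
\]
\[
\Lambda^{*} \;=\; \{\, \lambda \in \Lambda : \lambda \text{ is undominated under the chosen imprecise-decision criterion} \,\}
\qquad \text{(model selection: the reduced top set)}
\]

The reduced set $\Lambda^{*}$ then carries the remaining imprecision forward, e.g. to the conditional probability tables of a credal network.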