Abstract

Conditioning is the generally agreed-upon method for updating a priori knowledge, representable by a probability distribution, when one learns that an event is certainly true. When the observation is uncertain, the rule of cross-entropy minimization can be used to update the a priori probability distribution. This paper examines how to update a priori knowledge representable by a random set when one learns that the observation is representable by another random set. To resolve this problem, we first derive, for each a priori probability distribution, a conditioning rule that defines a posterior probability distribution conditioned on the observed random set; the rule is obtained from cross-entropy minimization and is called the generalized Jeffrey's rule in this paper. The derived posterior probability distribution is compatible with the observed random set and 'close' to the original a priori probability distribution. Second, based on the generalized Jeffrey's rule, a new belief measure conditioned on a random set is derived, and its relationship with knowledge updating is discussed. Finally, the convergence of the sequence of probability distributions obtained by repeatedly applying the generalized Jeffrey's rule is discussed.
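
To make the updating idea concrete, the following sketch illustrates the classical Jeffrey's rule, i.e. the special case of cross-entropy minimization in which the uncertain observation is a partition with prescribed new probabilities. It is only an illustration of the underlying principle, not the paper's generalized construction for random sets; the function name `jeffrey_update` and its arguments are hypothetical names chosen for this example.

```python
def jeffrey_update(prior, partition, q):
    """Return the distribution closest (in KL divergence) to `prior`
    subject to assigning mass q[i] to partition block partition[i].

    prior     : dict mapping outcomes to prior probabilities
    partition : list of lists of outcomes (pairwise disjoint, covering the support)
    q         : list of new probabilities for the partition blocks (sums to 1)
    """
    posterior = {}
    for block, q_i in zip(partition, q):
        block_mass = sum(prior[w] for w in block)
        for w in block:
            # Within each block the prior's relative weights are preserved,
            # while the block's total mass is rescaled to the observed value q_i;
            # this reallocation is the cross-entropy-minimizing update.
            posterior[w] = q_i * prior[w] / block_mass if block_mass > 0 else 0.0
    return posterior


# Example: prior over three outcomes; the uncertain observation says
# the block {"a"} now carries mass 0.7 and {"b", "c"} carries 0.3.
prior = {"a": 0.2, "b": 0.5, "c": 0.3}
partition = [["a"], ["b", "c"]]
q = [0.7, 0.3]
print(jeffrey_update(prior, partition, q))
# -> approximately {'a': 0.7, 'b': 0.1875, 'c': 0.1125}
```

Note that when a block's prescribed probability is 1 and all others are 0, this reduces to ordinary conditioning on that block, which matches the abstract's remark that conditioning handles the case of a certainly true event.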
