Abstract

Interpreting uncertain information, a fundamental requirement of many computer vision and pattern recognition systems, is commonly supported by models of the uncertainty. Evidence theory, also called Dempster-Shafer theory, is particularly useful for representing and combining uncertain information when a single precise uncertainty model is unavailable. A framework is presented for deriving and transforming evidence-theoretic belief representations of uncertain variables that denote numerical quantities. Belief is derived from probabilistic models using relationships between probability bounds and the support and plausibility functions used in evidence theory. This model-based approach to belief representation is illustrated by an algorithm currently used in a vision system to label anomalous high-intensity pixels in imagery. As the uncertain variables are manipulated to form features and object discriminants, the belief representation of the uncertain variables must be transformed accordingly. Belief transformations, analogous to the transformation of probability-density functions in mappings of random variables, are derived to maintain the same rigorous belief representation for computed quantities. The results demonstrate novel ways to address uncertainty in the use of sensor information, and contribute to an understanding of the similarities and distinctions between probability theory and evidence theory.
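
The following Python sketch is only an illustration of the evidence-theoretic quantities the abstract names, not the paper's algorithm: a basic probability assignment (mass function) over focal sets of a discrete numerical variable, the support/belief and plausibility functions it induces as lower and upper probability bounds, and a belief transformation under a mapping of the variable (here an assumed mapping y = x**2), carried out by pushing each focal set through the mapping.

```python
# Illustrative sketch of belief/plausibility bounds and belief transformation
# under a variable mapping. The mass values and the mapping are hypothetical.

def belief(mass, A):
    """Support/belief of A: total mass of focal sets contained in A (lower probability bound)."""
    A = frozenset(A)
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(mass, A):
    """Plausibility of A: total mass of focal sets intersecting A (upper probability bound)."""
    A = frozenset(A)
    return sum(m for B, m in mass.items() if B & A)

def transform(mass, f):
    """Push the mass function through a mapping f of the variable,
    analogous to transforming a probability density under y = f(x)."""
    out = {}
    for B, m in mass.items():
        image = frozenset(f(x) for x in B)
        out[image] = out.get(image, 0.0) + m
    return out

# Basic probability assignment over focal sets of an uncertain variable x.
mass = {
    frozenset({-1, 0, 1}): 0.5,   # imprecise evidence: x lies in {-1, 0, 1}
    frozenset({1}): 0.3,          # precise evidence: x = 1
    frozenset({-1, 1}): 0.2,      # x lies in {-1, 1}
}

A = {0, 1}
print(belief(mass, A), plausibility(mass, A))      # 0.3 <= P(x in {0,1}) <= 1.0

mass_y = transform(mass, lambda x: x * x)          # assumed mapping y = x**2
print(belief(mass_y, {1}), plausibility(mass_y, {1}))   # 0.5 <= P(y = 1) <= 1.0
```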
