Abstract

Bayesian methods provide an objective analysis for problems with incomplete information, but they require assigning an a priori probability, or prior. The prior should contain the least possible information while remaining consistent with the statistical parameters of the problem. Doing this correctly requires a complete integration of Information Theory with Bayesian estimation methods. When finding probability distributions over probability assignments, traditional methods are not entirely self-consistent: in papers presented at the 1996 MaxEnt Workshop, Larson, Evenson, and Dukes demonstrated that commonly used methods minimize only part of the total information. Minimizing the total information instead produces an entropic prior. Applying this complete method to find the best probability distribution over probability assignments for three- to five-sided dice shows that the entropic-prior Bayesian method gives results that differ significantly from the standard approaches, especially at low dimensions.
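The contrast the abstract describes can be illustrated numerically. The sketch below is not the authors' method; it is a minimal Monte Carlo comparison assuming an entropic prior of the common form π(p) ∝ exp(H(p)), where H is the Shannon entropy, against a flat prior on the probability simplex, for hypothetical roll counts from a three-sided die. All function names and the count data are illustrative assumptions.

```python
import math
import random

def sample_simplex(n, rng):
    # Draw a point uniformly from the (n-1)-simplex via exponential spacings.
    xs = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(xs)
    return [x / s for x in xs]

def entropy(p):
    # Shannon entropy H(p) in nats.
    return -sum(x * math.log(x) for x in p if x > 0)

def posterior_mean(counts, prior_weight, samples=50000, seed=0):
    """Monte Carlo posterior mean of the face probabilities p.

    prior_weight(p) is an unnormalized prior density on the simplex;
    the likelihood is multinomial in the observed face counts.
    """
    rng = random.Random(seed)
    n = len(counts)
    num = [0.0] * n
    den = 0.0
    for _ in range(samples):
        p = sample_simplex(n, rng)
        # Importance weight = prior density times multinomial likelihood.
        w = prior_weight(p) * math.prod(pi ** c for pi, c in zip(p, counts))
        den += w
        for i in range(n):
            num[i] += w * p[i]
    return [x / den for x in num]

counts = [7, 2, 1]  # hypothetical rolls of a three-sided die
flat = posterior_mean(counts, lambda p: 1.0)                    # flat prior
entropic = posterior_mean(counts, lambda p: math.exp(entropy(p)))  # entropic prior
print("flat prior:    ", flat)
print("entropic prior:", entropic)
```

Because exp(H(p)) weights high-entropy (near-uniform) assignments more heavily, the entropic-prior posterior is pulled toward the uniform distribution relative to the flat-prior posterior; with few faces and few rolls the shift is visible, which is consistent with the abstract's remark that the difference is most apparent at low dimensions.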
