Abstract

This chapter can be thought of as an extension of the material covered in Chapter 4, which was concerned with how to encode a given state of knowledge into a probability distribution suitable for use in Bayes' theorem. However, sometimes the information is of a form that does not simply enable us to evaluate a unique probability distribution p(Y|I). For example, suppose our prior information expresses the following constraint: I ≡ “the mean value of cos y = 0.6.” This information alone does not determine a unique p(Y|I), but we can use I to test whether any proposed probability distribution is acceptable. For this reason, we call this type of constraint information testable information. In contrast, consider the following prior information: I₁ ≡ “the mean value of cos y is probably > 0.6.” This latter information, although clearly relevant to inference about Y, is too vague to be testable because of the qualifier “probably.” Jaynes (1957) demonstrated how to combine testable information with Claude Shannon's entropy measure of the uncertainty of a probability distribution to arrive at a unique probability distribution. This principle has become known as the maximum entropy principle, or simply MaxEnt. We will first investigate how to measure the uncertainty of a probability distribution and then find how this measure is related to the entropy of the distribution. We will then examine three simple constraint problems and derive their corresponding probability distributions.
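
As a concrete illustration of the MaxEnt idea (a minimal sketch, not drawn from the chapter itself), the code below numerically handles the first constraint mentioned above: among all distributions p(y) on an assumed domain [0, 2π), the one that maximizes the Shannon entropy subject to ⟨cos y⟩ = 0.6 has the exponential form p(y) ∝ exp(λ cos y), where the Lagrange multiplier λ is fixed by the constraint. The choice of domain, the use of Python with scipy, and all function names here are illustrative assumptions.

```python
# Minimal MaxEnt sketch (illustrative, not from the chapter): find p(y) on the
# assumed domain [0, 2*pi) that maximizes Shannon entropy subject to the
# testable constraint <cos y> = 0.6.  The solution has the exponential form
# p(y) = exp(lambda*cos y) / Z(lambda); we solve for lambda numerically.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

TARGET = 0.6  # the constrained mean value of cos y

def partition(lam):
    """Normalization Z(lambda) = integral of exp(lambda*cos y) over [0, 2*pi)."""
    return quad(lambda y: np.exp(lam * np.cos(y)), 0.0, 2.0 * np.pi)[0]

def mean_cos(lam):
    """Expectation of cos y under p(y) = exp(lambda*cos y) / Z(lambda)."""
    num = quad(lambda y: np.cos(y) * np.exp(lam * np.cos(y)), 0.0, 2.0 * np.pi)[0]
    return num / partition(lam)

# Solve mean_cos(lambda) = 0.6 for the Lagrange multiplier lambda.
# At lambda = 0 the mean is 0, and it approaches 1 for large lambda,
# so the root is bracketed by [0, 20].
lam = brentq(lambda l: mean_cos(l) - TARGET, 0.0, 20.0)
print(f"lambda = {lam:.4f}, Z = {partition(lam):.4f}")
# p(y) = exp(lam*cos(y)) / Z(lam) is then the unique MaxEnt distribution.
```

Running this sketch gives λ of roughly 1.5; the resulting p(y) is the distribution that satisfies the testable constraint while remaining maximally noncommittal about everything the constraint does not determine, which is exactly the sense in which MaxEnt selects a unique distribution.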
