Abstract

When Shannon presents the formula for the entropy of a memoryless source, he presupposes that the prior probabilities of the different source symbols are known. This paper deals with the quantity of information acquired when the prior probabilities of a binary source are learned from a sequence of N source symbols or Bernoulli trials. Two learning methods are considered: maximum likelihood estimation of a parameter ϑ by calculation of the relative frequency, and calculation of the posterior probability density for ϑ. For both methods the acquired information behaves as (1/2) log N + const. for large N.
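
To make the asymptotic claim concrete, here is a minimal Python sketch of the posterior-density variant: it updates a uniform Beta(1, 1) prior to a Beta(1 + k, 1 + N - k) posterior after k successes in N trials and measures the acquired information as the Kullback-Leibler divergence from the prior to the posterior (for a uniform prior this equals the negative differential entropy of the posterior). The uniform prior, the value theta_true, and the KL-based information measure are illustrative assumptions; the paper's exact definitions may differ.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
theta_true = 0.3  # assumed true Bernoulli parameter, for illustration only

for N in (10, 100, 1_000, 10_000):
    k = rng.binomial(N, theta_true)        # successes observed in N trials
    posterior = beta(1 + k, 1 + N - k)     # conjugate update of the Beta(1, 1) prior
    # KL(posterior || uniform prior) = -differential entropy of the posterior (nats)
    info_nats = -posterior.entropy()
    print(f"N = {N:6d}   acquired info = {info_nats:.3f} nats   "
          f"(1/2) ln N = {0.5 * np.log(N):.3f}")
```

As N grows, the printed information tracks (1/2) ln N with a roughly constant offset, consistent with the (1/2) log N + const. behavior stated above.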
