Abstract

We analyze the relationship between a minimum description length (MDL) estimator (posterior mode) and a Bayes estimator for exponential families. We show the following results concerning these estimators: (a) both the Bayes estimator with Jeffreys (1961) prior and the MDL estimator with the uniform prior with respect to the expectation parameter are nearly equivalent to a bias-corrected maximum-likelihood estimator with respect to the canonical parameter; and (b) both the Bayes estimator with the uniform prior with respect to the canonical parameter and the MDL estimator with Jeffreys prior are nearly equivalent to the maximum-likelihood estimator (MLE), which is unbiased with respect to the expectation parameter. Together, these results suggest a striking symmetry between the two estimators, since the canonical and expectation parameters of an exponential family form a dual pair from the point of view of information geometry. Moreover, (a) implies that we can approximate a Bayes estimator with Jeffreys prior simply by deriving an appropriate MDL estimator or an appropriate bias-corrected MLE. This is important because a Bayes mixture density with Jeffreys prior is known to be maximin in universal coding.
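As a concrete illustration of (a) and (b), consider the Bernoulli family, whose canonical parameter is theta = log(p / (1 - p)) and whose expectation parameter is eta = p. The sketch below is ours, not the paper's: it assumes the Bayes estimator is read as the posterior mean of the expectation parameter, and the MDL estimator as the minimizer of the optimally discretized two-part code length, whose discretization contributes a (1/2) log det I(theta) term to the objective (a common convention, though the abstract itself only says "posterior mode"). Under those assumptions the closed forms below exhibit the stated near-equivalences numerically.

```python
# Hypothetical Bernoulli illustration (ours, not taken from the paper).
# Canonical parameter: theta = log(p / (1 - p)); expectation parameter: eta = p.
# Assumed definitions: Bayes estimator = posterior mean of eta;
# MDL estimator = argmax of log p(x|theta) + log w(theta) - (1/2) log det I(theta).
import math


def estimators(k: int, n: int):
    """Closed-form estimators of p for k successes in n trials (assumes 0 < k < n)."""
    p_mle = k / n  # MLE; unbiased for the expectation parameter eta = p

    # (a) Bayes estimator with Jeffreys prior Beta(1/2, 1/2): posterior mean of p.
    bayes_jeffreys = (k + 0.5) / (n + 1)

    # (a) MDL estimator with a prior uniform in eta = p: maximizes
    # p^k (1-p)^(n-k) / sqrt(I(p)) with I(p) = 1 / (p (1 - p)),
    # i.e. p^(k+1/2) (1-p)^(n-k+1/2); the maximizer is (k + 1/2) / (n + 1).
    mdl_uniform_eta = (k + 0.5) / (n + 1)

    # (a) Bias-corrected MLE in the canonical parametrization, mapped back to p.
    # The asymptotic bias of theta_hat is -psi'''(theta) / (2 n psi''(theta)^2),
    # which for the Bernoulli family gives the correction (1 - 2p) / (2 n p (1 - p)).
    theta_hat = math.log(p_mle / (1 - p_mle))
    correction = (1 - 2 * p_mle) / (2 * n * p_mle * (1 - p_mle))
    bias_corrected = 1 / (1 + math.exp(-(theta_hat + correction)))

    # (b) Bayes estimator with a prior uniform in theta (improper Beta(0, 0) in p):
    # the posterior is Beta(k, n - k), whose mean is exactly k / n.
    bayes_uniform_theta = k / n

    # (b) MDL estimator with Jeffreys prior: the sqrt(det I) factors cancel,
    # so it maximizes the likelihood and coincides with the MLE.
    mdl_jeffreys = p_mle

    return {
        "MLE": p_mle,
        "(a) Bayes, Jeffreys prior": bayes_jeffreys,
        "(a) MDL, uniform in eta": mdl_uniform_eta,
        "(a) bias-corrected MLE (canonical)": bias_corrected,
        "(b) Bayes, uniform in theta": bayes_uniform_theta,
        "(b) MDL, Jeffreys prior": mdl_jeffreys,
    }


if __name__ == "__main__":
    for name, value in estimators(k=7, n=20).items():
        print(f"{name:38s} {value:.6f}")
```

Under these assumed definitions, the first two estimators in group (a) coincide exactly at (k + 1/2)/(n + 1), the bias-corrected MLE differs from them only at order 1/n^2, and both estimators in group (b) reduce exactly to the MLE k/n, mirroring the duality described in the abstract.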
