Abstract

The bias and variance of traditional parameter estimators are parameter-dependent quantities. The maximum likelihood estimator (MLE) can be defined directly on a family of distributions P and so is parameter-free. This parameter invariance is reflected in the fact that the MLE for the original parameter and the MLE for any reparametrization name the same distribution. We define parameter-free estimators to be P-valued random variables rather than parameter-valued random variables. The Kullback–Leibler (KL) risk is decomposed into two parameter-free quantities that describe the variance and the squared bias of the estimator. We show that for exponential families the P-valued MLE is unbiased. We define the KL mean K of a P-valued random variable and show how K describes the long-run properties of this random distribution. For most families P, the KL mean of any P-valued random variable will not lie in P, so we define another mean M, called the distribution mean, which is related to K and is an element of P. If the distribution estimator is allowed to take values outside of P, its KL mean can be made to lie in P. We compare the MLE to non-P-valued estimators that have been suggested for the Hardy–Weinberg model. Results for the dual KL risk are also given.
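
For context, here is a minimal sketch of one natural decomposition of this type, via the standard compensation identity for KL divergence; the notation (P̂ for the P-valued estimator, Q for the true distribution, K for the mixture mean of P̂) is ours and the paper's exact definitions of K and M may differ:

\[
\mathbb{E}\, D(\hat{P} \,\|\, Q)
\;=\;
\underbrace{\mathbb{E}\, D(\hat{P} \,\|\, K)}_{\text{variance term}}
\;+\;
\underbrace{D(K \,\|\, Q)}_{\text{squared-bias term}},
\qquad
K(\cdot) \;=\; \mathbb{E}\,\hat{P}(\cdot).
\]

Both terms on the right are defined on distributions alone, hence parameter-free; unbiasedness in this sense means K = Q. Because K is a mixture of elements of P, it typically falls outside P even when every realization of P̂ lies in P, which motivates introducing a second mean M that is an element of P.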
