Abstract

Best linear unbiased estimators (BLUEs) are known to be optimal in many respects under normal assumptions. Since variance minimization does not depend on normality, and since unbiasedness is often considered reasonable, many statisticians have felt that BLUEs ought to perform relatively well in some generality. The result here considers the general linear model and shows that any measurable estimator that is unbiased over a moderately large family of distributions must be linear. Thus, imposing unbiasedness cannot offer any improvement over imposing linearity. The problem was suggested by Hansen (2022), who showed that any estimator unbiased for nearly all error distributions (with finite covariance) must have a variance no smaller than that of the best linear estimator in some parametric sub-family; that is, the hypothesis of linearity can be dropped from the classical Gauss-Markov Theorem. This might suggest that the best unbiased estimator should provide superior performance, but the result here shows that the best unbiased regression estimator can be no better than the best linear estimator.
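The classical Gauss-Markov setting referenced above can be illustrated with a small Monte Carlo sketch (not from the paper; the weight perturbation below is a hypothetical construction chosen for illustration). Under the linear model y = Xβ + e with i.i.d. homoskedastic errors, OLS is one linear unbiased estimator; any other linear estimator Ay with AX = I is also unbiased, but the Gauss-Markov Theorem says its variance can be no smaller than that of OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.array([1.0, 2.0])

# OLS weights: (X'X)^{-1} X', so A_ols @ X = I (unbiasedness).
A_ols = np.linalg.solve(X.T @ X, X.T)

# A hypothetical alternative linear unbiased estimator: perturb the OLS
# weights by a component annihilated by X, so A_alt @ X = I still holds.
P = X @ np.linalg.solve(X.T @ X, X.T)   # projection onto col(X)
M = np.eye(n) - P                       # annihilator: M @ X = 0
A_alt = A_ols + 0.3 * rng.standard_normal((2, n)) @ M

est_ols = np.empty((reps, 2))
est_alt = np.empty((reps, 2))
for r in range(reps):
    e = rng.standard_normal(n)          # homoskedastic errors
    y = X @ beta + e
    est_ols[r] = A_ols @ y
    est_alt[r] = A_alt @ y

# Both estimators are unbiased, but OLS has the smaller variance.
print("means:", est_ols.mean(0), est_alt.mean(0))
print("variances:", est_ols.var(0), est_alt.var(0))
```

Because the cross term A_ols M vanishes, Var(A_alt y) = Var(A_ols y) + σ²CC′ with C the perturbation, so the alternative's variance exceeds that of OLS coordinate-wise. The result of the present paper says unbiasedness over a large enough family of error distributions forces an estimator into this linear class in the first place.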
