Abstract

It is well known that, under general regularity conditions, the distribution of the maximum likelihood estimator (MLE) is asymptotically normal. Very recently, bounds of the optimal order $O(1/\sqrt{n})$ on the closeness of the distribution of the MLE to normality in the so-called bounded Wasserstein distance were obtained [2, 1], where $n$ is the sample size. However, the corresponding bounds on the Kolmogorov distance were only of the order $O(1/n^{1/4})$. In this paper, bounds of the optimal order $O(1/\sqrt{n})$ on the closeness of the distribution of the MLE to normality in the Kolmogorov distance are given, as well as their nonuniform counterparts, which work better in tail zones of the distribution of the MLE. These results are based in part on previously obtained general optimal-order bounds on the rate of convergence to normality in the multivariate delta method. The crucial observation is that, under natural conditions, the MLE can be tightly enough bracketed between two smooth enough functions of the sum of independent random vectors, which makes the delta method applicable. It appears that the nonuniform bounds for MLEs in general have no precedents in the existing literature; a special case was recently treated by Pinelis and Molzon [20]. The results can be extended to $M$-estimators.

Highlights

  • Let us begin with the following quote from Kiefer [9] of 1968: "a second area of what seem to me important problems to work on has to do with the fact that we do have, in many settings, quite a good large sample theory, but we don't know how large the sample sizes have to be for that theory to take hold"

  • I'm sure most of you are familiar with the error estimate one can give for the classical central limit theorem, which goes by the name of the Berry-Esseen estimate, and which tells you that under certain assumptions one can give an explicit bound on the departure from the normal distribution of the sample mean for a given sample size, the error term being of order $1/\sqrt{n}$

  • The new method yields uniform bounds of the optimal order $O(1/\sqrt{n})$ on the closeness of the distribution of the maximum likelihood estimator (MLE) to normality and their so-called nonuniform counterparts, which work much better for large deviations, that is, in tail zones of the distribution of the MLE – which are usually of foremost interest in statistical tests. Such nonuniform bounds for MLEs in general appear to have no precedents in the existing literature
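For reference, the classical Berry-Esseen bound mentioned in the highlights can be stated as follows (a standard formulation, for i.i.d. $X_1, X_2, \ldots$ with mean $\mu$, variance $\sigma^2 > 0$, and finite third absolute moment):

```latex
\sup_{x \in \mathbb{R}}
\left| \mathbb{P}\!\left( \frac{X_1 + \dots + X_n - n\mu}{\sigma\sqrt{n}} \le x \right)
  - \Phi(x) \right|
\le \frac{C \, \mathbb{E}|X_1 - \mu|^3}{\sigma^3 \sqrt{n}},
```

where $\Phi$ is the standard normal cumulative distribution function and $C$ is an absolute constant (known estimates place it below $1/2$). The paper's contribution is bounds of this same optimal order $O(1/\sqrt{n})$, but for the distribution of the MLE rather than the sample mean.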


Summary

Introduction

Let us begin with the following quote from Kiefer [9] of 1968: "a second area of what seem to me important problems to work on has to do with the fact that we do have, in many settings, quite a good large sample theory, but we don't know how large the sample sizes have to be for that theory to take hold." Can one give explicitly some useful bound on the departure from the asymptotic normal distribution as a function of the sample size $n$? In the rather common special case when the MLE $\hat\theta$ is expressible as a smooth enough function of a linear statistic of independent identically distributed (i.i.d.) observations, the bounds obtained in [2] were sharpened and simplified in [1] by using a version of the delta method. It was assumed in [1] that $q(\hat\theta) = \frac{1}{n} \sum_{i=1}^n g(X_i)$, where $q \colon \Theta \to \mathbb{R}$ is a twice continuously differentiable one-to-one mapping, $g \colon \mathbb{R} \to \mathbb{R}$ is a Borel-measurable function, and the $X_i$'s are i.i.d. real-valued r.v.'s
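The setting $q(\hat\theta) = \frac{1}{n}\sum_{i=1}^n g(X_i)$ can be illustrated with a concrete example not taken from the paper: for i.i.d. Exponential observations with rate $\lambda$, the MLE is $\hat\lambda = 1/\bar X$, so $q(\hat\lambda) = \bar X$ with $q(t) = 1/t$ and $g(x) = x$. The Monte Carlo sketch below (function and parameter names are illustrative, not from the paper) estimates the Kolmogorov distance between the standardized MLE and its normal limit, which should shrink at roughly the rate $1/\sqrt{n}$ discussed above.

```python
from math import erf

import numpy as np

rng = np.random.default_rng(0)

def mle_kolmogorov_distance(n, lam=1.0, reps=100_000):
    """Monte Carlo estimate of the Kolmogorov distance between the
    standardized MLE of an exponential rate and its N(0, 1) limit."""
    # For X_1, ..., X_n i.i.d. Exponential(rate=lam), the MLE is
    # lam_hat = 1 / X-bar, i.e. q(lam_hat) = X-bar with q(t) = 1/t and
    # g(x) = x -- exactly the delta-method setting described above.
    # X-bar is Gamma(shape=n, scale=1/(n*lam)); sample it directly.
    xbar = rng.gamma(shape=n, scale=1.0 / (n * lam), size=reps)
    mle = 1.0 / xbar
    # Fisher information is 1/lam^2, so sqrt(n)*(lam_hat - lam)/lam -> N(0, 1).
    z = np.sort(np.sqrt(n) * (mle - lam) / lam)
    # Standard normal CDF at each sorted point.
    phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
    # Kolmogorov distance between the empirical CDF and Phi.
    i = np.arange(1, reps + 1)
    return float(max(np.max(i / reps - phi), np.max(phi - (i - 1) / reps)))

for n in (25, 100, 400):
    print(n, round(mle_kolmogorov_distance(n), 4))
```

Quadrupling $n$ should roughly halve the estimated distance, consistent with an $O(1/\sqrt{n})$ rate (up to Monte Carlo noise).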

Pinelis
General setting
Making the bracketing work
Bounding the remainder
Findings
Conclusion
