Abstract

In this paper we study the problem of estimating a distribution from data that contain small measurement errors. The only assumption on these errors is that the average absolute measurement error converges to zero with probability one as the sample size tends to infinity. In particular, we do not assume that the measurement errors are independent with expectation zero. Throughout the paper we assume that the distribution to be estimated has a density with respect to the Lebesgue-Borel measure. We show that the empirical measure based on the data with measurement errors leads to a uniformly consistent estimate of the distribution function. Furthermore, we show that in general no estimate is consistent in the total variation sense for all distributions under the above assumptions. However, if the average measurement error converges to zero faster than a properly chosen sequence of bandwidths, the total variation error of the distribution estimate corresponding to a kernel density estimate converges to zero for all distributions. In the case of a general additive error model we show that this result holds even if only the average measurement error converges to zero. The results are applied to the estimation of the density of residuals in a random design regression model, where the residual error is not independent of the predictor.
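
As a rough numerical illustration of this setting (a minimal sketch, not code from the paper; the standard normal example, the error model, and the rates below are our own illustrative choices), the following Python snippet contaminates a sample with small errors that are neither independent of the data nor centered at zero, and checks that the empirical distribution function of the contaminated data still approaches the true distribution function uniformly as the average absolute error goes to zero:

```python
# Minimal sketch of the setting described in the abstract (illustrative choices, not the
# paper's code): the true sample has a density, the observed values carry small measurement
# errors that are neither independent of the data nor mean zero, and only the average
# absolute error (1/n) * sum_i |X_{i,n} - X_i| is required to tend to zero as n grows.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def empirical_cdf(sample, t):
    """Empirical distribution function of `sample`, evaluated at the points `t`."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

t = np.linspace(-4.0, 4.0, 801)
for n in (100, 1_000, 10_000):
    x_true = rng.standard_normal(n)                               # unobserved sample from the true density
    x_obs = x_true + 0.5 * n ** (-0.5) * (1.0 + np.sin(x_true))   # contaminated data: dependent, non-centered errors

    avg_abs_error = np.mean(np.abs(x_obs - x_true))               # tends to 0 although errors are not mean zero
    sup_dist = np.max(np.abs(empirical_cdf(x_obs, t) - norm.cdf(t)))  # uniform (sup-norm) error of the empirical d.f.

    print(f"n={n:6d}  avg |error| = {avg_abs_error:.4f}  sup |F_n - F| = {sup_dist:.4f}")
```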

Highlights

  • AMS 2000 subject classifications: Primary 62G05; secondary 62G20

  • In this paper we study the problem of estimating a distribution from data that contain small measurement errors

  • The only assumption on these errors is that the average absolute measurement error converges to zero with probability one as the sample size tends to infinity


Summary

Main results

The empirical distribution function is possibly the simplest way to estimate a distribution function. Whenever μ has a density with respect to the Lebesgue-Borel measure, however, the total variation error of the above estimate does not converge to zero, because in this case we have μ({X1,n, …, Xn,n}) = 0 while the empirical measure puts all of its mass on this finite set. As our theorem shows, it is in general not possible to construct an estimate which is consistent for all densities and all samples satisfying (2.4), even if the sample with measurement errors does not change completely each time the sample size changes, i.e., if we are given data X1, X2, … from a single sequence. If we keep the additivity but drop the condition that the noise is diminishing, f cannot be estimated; we do not show this in this paper.
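
The following sketch (our own illustration, not taken from the paper; the bandwidth sequence and error model are assumptions chosen for the example) contrasts this with a kernel density estimate computed from the same contaminated data: when the bandwidth h_n shrinks more slowly than the average measurement error, the L1 distance between the estimated and the true density, and hence by Scheffé's lemma the total variation error, becomes small, whereas the empirical measure always has total variation distance one to any distribution with a density.

```python
# Illustrative sketch (not from the paper): the empirical measure sits on the finitely many
# observed points, a Lebesgue null set, so its total variation distance to any distribution
# with a density equals 1.  A kernel density estimate built from the same contaminated data,
# with a bandwidth h_n shrinking more slowly than the average measurement error, can still
# have a small total variation error; by Scheffe's lemma this equals half the L1 distance
# between the densities, which we approximate on a grid below.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t = np.linspace(-5.0, 5.0, 401)
dt = t[1] - t[0]
true_density = norm.pdf(t)

for n in (200, 2_000, 10_000):
    x_true = rng.standard_normal(n)
    x_obs = x_true + 0.5 * n ** (-0.5) * (1.0 + np.sin(x_true))   # small, dependent, non-centered errors
    h = n ** (-0.2)                                               # bandwidth shrinking more slowly than the error

    # Gaussian kernel density estimate based on the contaminated data
    kde = norm.pdf((t[:, None] - x_obs[None, :]) / h).mean(axis=1) / h

    tv = 0.5 * np.sum(np.abs(kde - true_density)) * dt            # total variation error via Scheffe's lemma
    print(f"n={n:6d}  h={h:.3f}  TV(estimate, true) ≈ {tv:.4f}")
```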

Estimation of the density of residuals
Proof of Theorem 1
Proof of Theorem 2
Proof of Theorem 3
Proof of Theorem 4
