Abstract

In the majority of geodetic applications, customary confidence regions do not truly reflect the confidence one can have in the produced estimators. Since it is common practice in daily data analysis to combine methods of parameter estimation and hypothesis testing before the final estimator is produced, it is their combined uncertainty that has to be taken into account when constructing confidence regions. Ignoring the impact of testing on estimation produces faulty confidence regions and therefore an incorrect description of the estimator's quality. In this contribution, we address the interplay between estimation and testing and show how their combined non-normal distribution can be used to construct truthful confidence regions. Our focus is on the design phase, prior to the collection of the actual measurements, in which the working (null) hypothesis is assumed to be true. We discuss two different approaches for constructing confidence regions: Approach I, in which the region's shape is user-fixed and only its size is determined by the distribution, and Approach II, in which both the size and shape are simultaneously determined by the estimator's non-normal distribution. We also prove and demonstrate that estimation-only confidence regions have poor coverage in the sense that they provide an optimistic picture. In addition to the theory, we present computational procedures, for both Approach I and Approach II, for computing confidence regions and confidence levels that truthfully reflect the combined uncertainty of estimation and testing.

Highlights

  • To evaluate the quality of an estimator, it is not uncommon to compute the probability that a certain region, dependent on the estimator, covers the unknown true parameter, or alternatively, to compute the size of the region that corresponds to a certain preset value of that probability

  • Whether Approach I or Approach II is taken, it is important to realize that the properties of the confidence region are determined by the probabilistic properties of the estimator

  • A critical appraisal is provided of the computation and evaluation of confidence regions and standard ellipses



Introduction

To evaluate the quality of an estimator, it is not uncommon to compute the probability that a certain region, dependent on the estimator, covers the unknown true parameter (vector), or alternatively, to compute the size of the region that corresponds to a certain preset value of that probability. Such a region is called the confidence region (or confidence set) and the corresponding probability the confidence level. Two different approaches can be distinguished: Approach I, in which the shape of the confidence region is fixed by the user and only its size is determined by the estimator's distribution, and Approach II, in which the confidence region is determined by the contours of the probability density function (PDF) of the estimator for a given confidence level (Hyndman 1996; Gundlich and Koch 2002; Teunissen 2007).
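To make the effect concrete, the following minimal Monte Carlo sketch (Python, illustrative only and not the paper's computational procedure) builds an Approach-I-type elliptical region whose size is set by the chi-square distribution of the estimation-only estimator, and then checks its actual coverage when a simple w-test with adaptation (a basic DIA-type scheme) precedes the final estimate. The linear model, test level, and all numerical values are assumptions chosen for illustration.

```python
# Monte Carlo sketch: coverage of an estimation-only confidence ellipse when a
# testing/adaptation step is part of the estimation procedure. All model and
# test settings below are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(1)

# Linear model y = A x + e, e ~ N(0, sigma^2 I); alternative hypothesis:
# a single outlier in the first observation.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
m, n = A.shape
sigma = 1.0
x_true = np.zeros(n)                        # design phase: null hypothesis assumed true

N = A.T @ A                                  # normal matrix
Qxx = sigma**2 * np.linalg.inv(N)            # variance matrix of the BLUE under H0

# Approach-I region with user-fixed (elliptical) shape, sized by the chi-square
# distribution of the estimation-only (normal) estimator.
alpha_region = 0.05
k2 = chi2.ppf(1 - alpha_region, df=n)        # (x - xhat)' Qxx^{-1} (x - xhat) <= k2

# w-test for an outlier in observation 1 (data snooping), illustrative level.
c = np.zeros(m); c[0] = 1.0
alpha_test = 0.05
crit_w = norm.ppf(1 - alpha_test / 2)

P = np.eye(m) - A @ np.linalg.solve(N, A.T)  # projector onto the residual space
covered = 0
n_samples = 100_000
for _ in range(n_samples):
    y = A @ x_true + sigma * rng.standard_normal(m)
    x0 = np.linalg.solve(N, A.T @ y)         # estimator under H0
    e0 = P @ y                                # least-squares residuals
    w = (c @ e0) / (sigma * np.sqrt(c @ P @ c))  # w-test statistic
    if abs(w) > crit_w:
        # Test rejects H0: adapt by re-estimating with the outlier parameter included.
        Ab = np.hstack([A, c[:, None]])
        xhat = np.linalg.solve(Ab.T @ Ab, Ab.T @ y)[:n]
    else:
        xhat = x0
    d = x_true - xhat
    if d @ np.linalg.solve(Qxx, d) <= k2:
        covered += 1

print(f"nominal (estimation-only) confidence level: {1 - alpha_region:.3f}")
print(f"actual coverage of the combined estimator : {covered / n_samples:.3f}")
```

With these illustrative settings, the actual coverage of the combined estimation-and-testing estimator falls below the nominal estimation-only level, which is the optimistic picture the paper warns about; a deliberately large test level is used here so the effect is visible with a modest number of samples.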

Brief review of background theory
Integrated estimation and testing
Confidence region
Confidence level
Summary and conclusions
