Abstract

The archetypal procedure in Type A evaluation of measurement uncertainty involves making <em>n</em> observations of the same quantity, taking the sample variance <em>s</em>² to be an unbiased estimate of the underlying variance and quoting the figure <em>s</em> / sqrt(<em>n</em>) as the relevant standard uncertainty. Although this procedure is theoretically valid when the sample size <em>n</em> is fixed, it is not necessarily valid when <em>n</em> is chosen in response to the growing dataset. In fact, when the experimenter makes observations until a certain level of uncertainty in the mean is reached, the bias in the estimation of the variance can be as large as -45 %. Likewise, the usual nominal 95 % confidence interval can have a level of confidence as low as 88 %. This issue is discussed, and techniques are suggested so that Type A evaluation of uncertainty becomes as accurate as is implied. The 'objective Bayesian' approach to this issue is discussed and an associated unacceptable phenomenon is identified.
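The negative bias described above is easy to reproduce by simulation. The following sketch (my own illustration, not the paper's method; the stopping threshold and sample-size limits are arbitrary choices) draws normal observations until the standard uncertainty of the mean, <em>s</em> / sqrt(<em>n</em>), falls below a target, and then records the sample variance <em>s</em>². Averaging over many runs shows the estimate falling well below the true variance of 1:

```python
import random
import statistics

def sequential_experiment(sigma=1.0, target=0.3, n_min=2, n_max=200, rng=None):
    """Sample from N(0, sigma^2) until the standard uncertainty of the
    mean, s / sqrt(n), drops below `target` (or n_max is reached), then
    return the sample variance s^2 observed at the stopping point."""
    rng = rng or random.Random()
    data = [rng.gauss(0.0, sigma) for _ in range(n_min)]
    while len(data) < n_max:
        s2 = statistics.variance(data)  # unbiased (n - 1 denominator) for fixed n
        if (s2 / len(data)) ** 0.5 < target:
            break  # stopping rule triggered: uncertainty target reached
        data.append(rng.gauss(0.0, sigma))
    return statistics.variance(data)

def mean_variance_estimate(trials=20000, seed=1):
    """Average the stopped-sample variance over many replications."""
    rng = random.Random(seed)
    estimates = [sequential_experiment(rng=rng) for _ in range(trials)]
    return sum(estimates) / len(estimates)
```

Runs that happen to produce a small early <em>s</em>² stop sooner, so small variance estimates are over-represented at the stopping time and the average of <em>s</em>² falls below the true value of 1, even though <em>s</em>² would be unbiased for any fixed <em>n</em>.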
