Abstract

The validity of calibration and measurement capability (CMC) claims by national metrology institutes is supported by the results of international measurement comparisons. Many methods of comparison analysis are described in the literature and some have been recommended by CIPM Consultative Committees. However, the power of various methods to correctly identify biased results is not well understood. In this work, the statistical power and confidence of some methods of interest to the CIPM Consultative Committees were assessed using synthetic data sets with known properties. Our results show that the common mean model with largest consistent subset delivers the highest statistical power under conditions likely to prevail in mature technical fields, where most participants are in agreement and CMC claims can reasonably be supported by the results of the comparison. Our approach to testing methods is easily applicable to other comparison scenarios or analysis methods and will help the metrology community to choose appropriate analysis methods for comparisons in mature technical fields.
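The assessment methodology described above can be illustrated with a short simulation. The sketch below is not the authors' code: the number of laboratories, the uncertainty scale, the bias magnitude, and the simple weighted-mean detection rule are all illustrative assumptions. It generates synthetic comparison data sets with one deliberately biased participant, applies a detection rule, and estimates statistical power (the rate at which the biased laboratory is flagged) and confidence (the rate at which unbiased laboratories are not flagged).

```python
import numpy as np

rng = np.random.default_rng(1)

def flagged(x, u, k=2.0):
    """Flag participants whose result differs from the weighted mean by more
    than k times the uncertainty of that difference (illustrative rule only)."""
    w = 1.0 / u**2
    y = np.sum(w * x) / np.sum(w)            # weighted-mean reference value
    u_d = np.sqrt(u**2 - 1.0 / np.sum(w))    # u(d_i), correlation with the mean removed
    return np.abs(x - y) > k * u_d

n_labs, n_trials, bias = 10, 5000, 3.0       # assumed scenario parameters
u = np.full(n_labs, 1.0)                     # equal standard uncertainties
hits, false_alarms = 0, 0
for _ in range(n_trials):
    x = rng.normal(0.0, u)                   # unbiased synthetic results
    x[0] += bias                             # one participant given a known bias
    f = flagged(x, u)
    hits += f[0]                             # biased lab correctly flagged
    false_alarms += f[1:].sum()              # unbiased labs incorrectly flagged

print(f"power      ~ {hits / n_trials:.3f}")
print(f"confidence ~ {1 - false_alarms / (n_trials * (n_labs - 1)):.3f}")
```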

Highlights

  • The CIPM Mutual Recognition Arrangement (MRA) [1] is the framework through which national metrology institutes (NMIs) demonstrate the equivalence of their measurement standards and the calibration and measurement certificates they issue

  • We examined the following methods: the common mean model [4], the common mean model with largest consistent subset [5], the common mean model with cut-off weighting [17], the common mean model with exclusion of obvious outliers [17], the fixed-effects model with a weighted mean [6], the fixed-effects model with Bayesian model averaging [7], the random-effects model with the method of Mandel and Paule to achieve consistency [16], two further random-effects methods implemented by the NIST Consensus Builder (DerSimonian–Laird and hierarchical Bayesian), and the Linear Pool method, also implemented by the NIST Consensus Builder [15,18]; a minimal sketch of the first two methods follows this list

  • Even when all participants in a comparison submit results that are free from unacknowledged systematic errors, the measurements are still affected by other errors, and the evaluation of equivalence in Equation (11) may incorrectly determine that a participant is biased
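To make the first two methods in the list above concrete, here is a minimal sketch of the common mean model (inverse-variance weighted mean with a chi-squared consistency check, in the spirit of [4]) and a greedy variant of the largest consistent subset. The function names are my own, and note that [5] specifies a search for the largest subset that passes the consistency test, whereas the one-at-a-time removal below is only an approximation of that search.

```python
import numpy as np
from scipy import stats

def weighted_mean(x, u):
    """Inverse-variance weighted mean and its standard uncertainty."""
    w = 1.0 / u**2
    y = np.sum(w * x) / np.sum(w)
    return y, np.sqrt(1.0 / np.sum(w))

def chi2_consistent(x, u, alpha=0.05):
    """Overall chi-squared consistency check for the common mean model."""
    y, _ = weighted_mean(x, u)
    chi2_obs = np.sum((x - y) ** 2 / u**2)
    return stats.chi2.sf(chi2_obs, df=len(x) - 1) >= alpha

def largest_consistent_subset(x, u, alpha=0.05):
    """Greedy approximation: drop the result with the largest normalised
    deviation from the weighted mean until the remaining subset passes."""
    idx = np.arange(len(x))
    while len(idx) > 2 and not chi2_consistent(x[idx], u[idx], alpha):
        y, _ = weighted_mean(x[idx], u[idx])
        worst = np.argmax(np.abs(x[idx] - y) / u[idx])
        idx = np.delete(idx, worst)
    return idx
```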


Introduction

The CIPM Mutual Recognition Arrangement (MRA) [1] is the framework through which national metrology institutes (NMIs) demonstrate the equivalence of their measurement standards and the calibration and measurement certificates they issue. The main way that NMIs support CMC claims is to participate in international measurement comparisons, which yield a degree of equivalence (DoE) for each participant. Each DoE is a measure of the difference between the participant laboratory's measurement and the comparison reference value, and may be expected to reflect the consistency between participants' national standards. In this way, users of the CMC database may have a reasonable expectation that calibration of an artefact would produce equivalent results, to within the claimed expanded uncertainties, when carried out in different economies
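As a concrete illustration of the DoE concept, the sketch below computes unilateral degrees of equivalence against a weighted-mean reference value. The coverage factor k = 2 (approximately 95 % coverage) and the assumption that the reference value is the inverse-variance weighted mean of all results (so that each result is correlated with it) are illustrative choices, not taken from the paper.

```python
import numpy as np

def degrees_of_equivalence(x, u, k=2.0):
    """Unilateral degrees of equivalence d_i = x_i - y against a weighted-mean
    reference value y.  Because each result contributes to y, the two are
    correlated and u^2(d_i) = u_i^2 - u^2(y)."""
    w = 1.0 / u**2
    y = np.sum(w * x) / np.sum(w)        # comparison reference value
    u_y2 = 1.0 / np.sum(w)               # variance of the reference value
    d = x - y                            # degrees of equivalence
    U_d = k * np.sqrt(u**2 - u_y2)       # expanded uncertainty of each DoE
    equivalent = np.abs(d) <= U_d        # per-participant equivalence check
    return d, U_d, equivalent
```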

