Abstract

The usual statement about participation in an interlaboratory comparison (ILC) seems to be that it only yields a 'snapshot' of the performance of a measurement laboratory at a given point in time. It is also said to be a 'one-shot best performance', produced by an excellent measuring system operated by an excellent analyst for the very purpose of showing optimum performance on that occasion. Let us discuss the case where a result is submitted which is truly representative of the daily operations of each of two participants in an ILC involving many participants. Indeed, practical comparisons of measurement results usually occur between two parties in a buyer-seller, inspector-inspected, regulator-regulated, or accreditor-accredited context, almost never involving more than two laboratories. If one or both of the two participants' results are 'outlying' in a comparative graph of all participants, the traditional way of looking at it is that a measurement result 'up there' could be 'down here' next time, or inversely. Not a sensible observation, of course. The positions in the graph are simply proof that the measurement uncertainty for one or both of them, as defined in the ISO Guide to the Expression of Uncertainty in Measurement (GUM), was too optimistic, i.e. incomplete. Had it been realistic, i.e. reasonably complete, the measurement uncertainty of the one measurement result would have overlapped the second measurement result (maybe even completely overlapping its measurement uncertainty) and there would not have been a significant discrepancy between the two results. There is no need for frequent participation in an ILC to establish that. It just requires drafting a full (i.e. GUM) measurement uncertainty budget based on expert knowledge of the measurement procedures used, then participating once in an ILC and having this 'confirmed' by the independent check an ILC provides.
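The overlap criterion described above can be sketched numerically. A common way to test whether two results agree within their stated expanded uncertainties is the normalized error (E_n) of ISO 13528; the measurement values below are invented purely for illustration:

```python
import math

def en_score(x1, U1, x2, U2):
    """Normalized error E_n between two results x1, x2 with expanded
    uncertainties U1, U2 (coverage factor k = 2 assumed).
    |E_n| <= 1 means the two results agree within their uncertainties."""
    return abs(x1 - x2) / math.sqrt(U1 ** 2 + U2 ** 2)

# Invented example: the same pair of results under two uncertainty claims.
# Over-optimistic (incomplete) uncertainty budgets -> apparent discrepancy:
print(en_score(10.2, 0.2, 10.9, 0.2) <= 1.0)  # False
# Realistic (GUM-complete) uncertainty budgets -> no significant discrepancy:
print(en_score(10.2, 0.6, 10.9, 0.5) <= 1.0)  # True
```

Note that the results themselves are identical in both calls; only the claimed uncertainties change, which is exactly the point made in the text.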
Thus, it is somewhat expensive not to know the weaknesses in the type of measurement concerned: good knowledge would have avoided a wrong measurement result (i.e. a result displayed in the comparative graph with too small a 'measurement uncertainty'). No frequent participation in ILCs is needed to arrive at that conclusion, just a well-running internal precision control programme to ascertain that the repeatability of the measurement capability shown in an ILC comparative graph is maintained. The conclusion from a single participation that 'fails' is that a too-optimistic measurement uncertainty generates the incorrect conclusion that there are discrepancies between the two laboratories' measurement results. How then can measurement results of ILCs be interpreted, and how should the ILC be exploited? ILCs should offer a reliable, i.e. traceable, reference quantity value, instead of using the participants' data to calculate a so-called 'best' consensus value derived from participants' performance. A reference value in an ILC should be traceable to a metrological reference, or be produced by a reference measurement procedure agreed upon by the participants before the ILC. Both are external to the participants' performance rather than derived from it. We can then observe on either of the two participants' results that
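The same kind of check works against an external, traceable reference value rather than a consensus of the participants. A minimal sketch (invented values; expanded uncertainties at k = 2 assumed) of a degree-of-equivalence test against such a reference:

```python
import math

def degree_of_equivalence(x, U_x, x_ref, U_ref):
    """Degree of equivalence d = x - x_ref of a participant's result
    against a traceable reference value, with its expanded uncertainty.
    The result is compatible with the reference if |d| <= U(d)."""
    d = x - x_ref
    U_d = math.sqrt(U_x ** 2 + U_ref ** 2)
    return d, U_d, abs(d) <= U_d

# Invented example: reference value 10.40 +/- 0.20 from an agreed
# reference measurement procedure, external to the participants.
d, U_d, compatible = degree_of_equivalence(10.9, 0.5, 10.4, 0.2)
print(compatible)  # True: the lab's realistic uncertainty covers the offset
```

Because the reference value is external, each participant can be judged on its own, without its verdict depending on how the other participants performed.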
