Many individuals and laboratory teams experience some kind of surprise when they first compare their own measurement results with those from other laboratories. It can be a positive astonishment, namely how well the results match for a given measurand and sample. More often the contrary is observed, and the origin of the discrepancies has to be identified. However, the potential of an interlaboratory comparison (ILC), including its evaluation, depends strongly on the ILC design and on the amount of information provided by the participants. This is no different from any other scientific study. Yet several myths about the 'do and don't' of ILCs persist. For instance, opinions such as 'never combine measurement results from an ILC to assign a property value to a reference material' or 'increase the number of ILC participants to better approach the true value' are voiced at conferences and in committee meetings. Nevertheless, recent years have seen significant progress in the common understanding of the underlying scientific principles and concepts of ILCs, as well as in their design and execution. This has been, and continues to be, driven by the need to demonstrate the analytical capabilities (proficiency) of a set of laboratories, to identify the performance (limits) of a particular measurement method (more precisely, a 'measurement procedure'), or to characterize a specific property of the material under investigation. In the past, such judgments were left to individuals or institutions, often based on a non-transparent division into 'experts' and 'non-experts.' It is interesting to note that the increasing globalization of science, industry, trade, people's mobility, communication, etc., has also called the traditional way of defining experts and competences into question, at least partially.
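As an aside on the 'never combine measurement results' myth above: when combining is appropriate at all, one common consensus estimator is the inverse-variance weighted mean. The sketch below is illustrative only; the abstract does not prescribe any particular estimator, and the function name, data, and approach are assumptions for illustration.

```python
# Illustrative sketch (not from the source text): combining independent
# laboratory results into a consensus value via the inverse-variance
# weighted mean. Appropriate only under conditions such as those the text
# discusses (validated measurement procedures, laboratories of proven
# competence, consistent results).

def weighted_mean(values, uncertainties):
    """Return the inverse-variance weighted mean and its standard uncertainty.

    values        -- laboratory results for the same measurand
    uncertainties -- their standard uncertainties (same length, all > 0)
    """
    weights = [1.0 / u ** 2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, total ** -0.5

# Hypothetical results from four laboratories (value, standard uncertainty):
results = [10.2, 10.5, 9.9, 10.3]
u = [0.2, 0.3, 0.4, 0.2]
consensus, u_consensus = weighted_mean(results, u)
```

Note that the weighted mean rewards small claimed uncertainties, which is one reason blind pooling of ILC results is contested: a laboratory that underestimates its uncertainty pulls the consensus value toward its own result.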
Nowadays it is usually no longer sufficient to be endorsed by a well-known scientific 'heavyweight' or to belong to a traditionally well-reputed institution. One has to provide evidence for the claimed competence on a regular basis. ILCs can be a relatively independent route for demonstrating specific competences in measurement tasks. This is increasingly recognized by regulators as well. For instance, official control laboratories for food or environmental monitoring in the European Union are required by legislation to participate successfully in dedicated ILCs. A prerequisite for the wider acceptance of ILCs as a quality assurance tool was the combination of metrological principles for ILC design with dedicated elements of standardization and internationally harmonized surveillance approaches via accreditation. This ranges from requiring participation in proficiency testing for laboratories that wish to obtain and maintain accreditation according to ISO/IEC 17025, to ensuring quality criteria for proficiency testing through the application of ISO/IEC 17043. The latter document also contains a list of different goals that may be targeted by ILCs. In this respect, it is important to consider the interrelation between the three main components of a measurement exercise: the material (the sample under investigation), the method(s) (the measurement procedures used, including all sample preparation and manipulation steps), and the participating laboratories. One cannot independently assess more than one of these components in the same ILC. For instance, in the framework of the characterization study of a new candidate reference material, the measurement methods have to be validated before they are applied by laboratories of known (proven) competence. Measurement results obtained in such designed ILCs can be used for assigning property values.

H. Emons (&)
Geel, Belgium
e-mail: JRC-IRMM-ACQUAL@ec.europa.eu