The Asia Pacific Metrology Program (APMP) and Asia Pacific Laboratory Accreditation Cooperation (APLAC) joint proficiency testing (PT) programs for toxic elements such as cadmium (Cd) and lead (Pb) and nutritional elements such as iron (Fe) and zinc (Zn) in food were organized by the Korea Research Institute of Standards and Science (KRISS) with the aim of enhancing measurement quality and metrological traceability across the economies of the Asia Pacific region through rigorous performance evaluation. Three APMP-APLAC joint PT programs for elemental analysis were carried out sequentially by KRISS, in which candidate certified reference materials (CRMs) served as the PT materials and metrologically traceable certified reference values (RVs) served as the PT assigned values, allowing reliable evaluation of participants' results. This article describes the operation of the PTs and the overall performance of the participating laboratories. The effectiveness of these joint PT programs and trends in PT performance assessment are also discussed. These PT programs confirm the importance of using metrologically traceable RVs, rather than consensus values derived from participants, as the PT assigned values for reliable assessment. The results of some participants revealed a lack of understanding of the concepts of coverage factor, degrees of freedom, standard uncertainty, and expanded uncertainty. Interpreting the zeta-scores or En scores, which incorporate measurement uncertainties, in conjunction with the z-scores is highly meaningful for assessing both participants' measurement capabilities and their evaluation of measurement uncertainty. Assessing participants' performance with measurement uncertainties taken into account also helps participants check whether their uncertainty estimates were reasonable. The PT results further demonstrated that these programs are useful for improving the measurement capability of the laboratories, whereas more capability building in uncertainty evaluation is required for further improvement.
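For context, the performance scores referred to above have commonly used definitions (e.g., in ISO 13528); the following is a sketch of those standard forms, where the notation is assumed here and may differ from the symbols used in the article:
\[
z = \frac{x - x_{\mathrm{pt}}}{\sigma_{\mathrm{pt}}}, \qquad
\zeta = \frac{x - x_{\mathrm{pt}}}{\sqrt{u^{2}(x) + u^{2}(x_{\mathrm{pt}})}}, \qquad
E_{n} = \frac{x - x_{\mathrm{pt}}}{\sqrt{U^{2}(x) + U^{2}(x_{\mathrm{pt}})}},
\]
where $x$ is a participant's result, $x_{\mathrm{pt}}$ the assigned value, $\sigma_{\mathrm{pt}}$ the standard deviation for proficiency assessment, $u(\cdot)$ a standard uncertainty, and $U(\cdot)$ an expanded uncertainty. Because the zeta- and En scores depend on the reported uncertainties, comparing them with the z-score indicates whether a participant's uncertainty estimate is consistent with its measurement performance.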