Abstract

A metrological traceability chain [1] has enormous value: it shows where the measurement result comes from. One could say that it shows the "trace" which analysts have chosen and along which a measurement result comes to them. It shows to what 'reference' [2] the result is metrologically "trace"-able. In the simplest case, it leads to the (definition of the) measurement unit. A measurement unit [3] has a measurement uncertainty of zero because it is not itself measured. It therefore, of necessity, ends the metrological traceability chain. The inverse of this metrological traceability chain is a 'calibration hierarchy' [4], constituted by one or more sequential calibration steps. Through this sequence of steps, the hierarchy runs down from the definition of the measurement unit to the end-user's measuring system [5] and terminates at the calibration of the analyst's measuring system, the very purpose of its existence. Thus, it becomes natural and easy to evaluate the cumulative measurement uncertainty of the end-user's measurement result by walking along the calibration hierarchy of that result, from the definition of the measurement unit used down to the analyst's measuring system. When analysts choose a unit, at the top of the calibration hierarchy, from an internationally agreed measurement system of units, the SI ('Le Système international d'unités', 'The International System of Units'), or any other unit system such as the cgs (centimetre, gram, second) system, the mks (metre, kilogram, second) system, or the WHO (World Health Organisation) international unit system, they connect their measurement result to an agreed international reference system. Thus, their results are traceable to this commonly agreed 'reference'. Any such reference ensures 'metrological comparability of measurement results' [6] with other measurement results for the same quantity embodied in any material and traceable to the same reference. This comparability is a basic need we want to see fulfilled: the ability to compare our results in a metrologically meaningful way. In a chemical measurement, as in any other measurement, an output quantity value of the 'measurement function' [7] (i.e. a measured value for the measurand) is obtained as a function of measured input quantity values, of e.g. mass, amount, electric current, etc. [8], and, usually, of influencing quantities [9] such as pressure or temperature, which yield small corrections for systematic effects in the measuring system. The measurement uncertainties of these influencing quantity values do not usually contribute significantly to the uncertainty of the final measurement result, since they are (much) smaller than those of the input quantity values. They have their own metrological traceability chains, which can be seen as "grafted" onto the main chain just as branches are grafted onto the stem of a tree. Consideration of their measurement uncertainties is usually not very critical. For the purpose of this discussion, it is sufficient to note that their metrological traceability chains are also "unidirectional", i.e. they run from a result to a reference. In the above discussion, we have chosen a metrological traceability chain and its associated calibration hierarchy (its inverse), going up to (the definition of) a chosen measurement unit.
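Written out explicitly (the abstract itself gives no formula), the measurement function and, under the usual GUM assumption of uncorrelated input quantities, the combined standard uncertainty of the output quantity value take the form

\[ y = f(x_1, x_2, \ldots, x_N) \]
\[ u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i) \]

where influencing quantities enter as additional input quantities (correction terms) whose small u(x_i) contribute correspondingly little to u_c(y).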
Shorter or longer chains are possible, depending on whether another reference was chosen by the analyst when planning the measurement, such as a value for the measured quantity as embodied in a specified 'calibrator' [10], or a value for the measured quantity as obtained by a chosen 'reference measurement procedure'.
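As an illustration only (the step names and uncertainty values below are hypothetical, not taken from the article), walking down a calibration hierarchy and accumulating the standard uncertainties of its steps in quadrature might be sketched as follows:

```python
import math

# Hypothetical calibration hierarchy, listed from the unit definition
# (uncertainty zero, since a unit is not measured) down to the
# end-user's measuring system. The numbers are purely illustrative.
calibration_hierarchy = [
    ("definition of the measurement unit",        0.0),
    ("primary reference measurement procedure",   0.002),
    ("certified calibrator",                      0.005),
    ("end-user's measuring system",               0.010),
]

# Walk down the hierarchy, combining each step's standard uncertainty
# in quadrature with everything accumulated above it.
cumulative_u = 0.0
for step, u_step in calibration_hierarchy:
    cumulative_u = math.sqrt(cumulative_u**2 + u_step**2)
    print(f"{step}: cumulative standard uncertainty = {cumulative_u:.4f}")
```

A shorter chain, as described above, simply starts this walk at a calibrator value or at a reference measurement procedure instead of at the unit definition.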
