Abstract

During a recent holiday in the mountainous center of Kriti (in Axos, for the interested), a sentence struck me in an interesting book [1]: “Admittedly, one reason why statistical arguments sometimes fail to persuade is that different statistical methods may produce varying results and the investigators are suspected of choosing the method most favorable to their arguments. The range of statistical techniques available to the econometrician is so wide that the zealous advocate can often ‘torture the data until they confess’.”

These sentences are worth reflecting upon and perhaps deserve to be propagated and applied a bit more in contexts other than the purely economic, because they could be of general applicability. A case in point may be the treatment of ‘measurement results’ (see entry 2.9 in [2]) of ‘interlaboratory comparisons’ (ILCs) in chemistry, where such results are often “assumed to be normal” but frequently are not [3]. Sometimes (or often?) they are trimmed, combed, submitted to selective “cutting” procedures, or other kinds of treatment, until they fit the a priori model of “data distribution”, mostly “assumed to be normal”. Any value not conforming to that distribution, or disturbing it somehow, is looked at with suspicion. Apparently, we prefer to see what the results should look like according to an a priori conceived picture rather than see them as they are.

One can make the following observations. Mostly, people are interested in the value “closest to the truth”, or at least “close to the truth”, and they assume that such a value is located at, or lies close to, the central location of the distribution of the results. Hence, they look for the average of the ‘measured quantity values’ (see entry 2.10 in [2]). When one’s own measurement result also finds itself near that location, that generates the comfortable feeling of “being where most of the values are found”, and certainly the conclusion that “the best value cannot be far away from that”, and also “that could not be wrong, could it?” Hence, ‘measurement results’ close to the center of a distribution are important and deserve full attention, not the (very) low or (very) high values.

Remarkably enough, that is not what happens. Rather, much of the attention is concentrated on the low and high values, and much effort is spent in finding reasons to eliminate them. Why? Clearly, because that makes the calculated standard deviation of the average smaller. However, eliminating extreme values does not change the location of the average very much. Rather, the spread around the “comfortable” average value is reduced, which gives the average still more authority. This approach is all the more appealing because the resulting distribution of the values looks more “normal”.

Are we then looking at a “self-fulfilling prophecy”? Or, better, at a “self-fulfilling reasoning” tweaked to confirm an assumption (of normal distribution) already made beforehand? Is it logical to proceed along a reasoning which reduces the standard deviation per se in order to increase the “trust” in the average? Since the most centrally located ‘quantity value’ is determined by the most centrally located measured values and not by a few extraneous ones, the suspicion arises that we eliminate the extreme values in …
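The statistical point made above, that rejecting extreme values barely moves the average but markedly shrinks the calculated standard deviation, is easy to demonstrate numerically. The following is a minimal sketch, not taken from the original text: it uses synthetic data and a hypothetical 2s rejection rule (one of several common outlier-rejection conventions, chosen here purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical ILC: 20 laboratory results drawn from a normal
# distribution around a value of 100, plus two extreme values.
results = np.concatenate([rng.normal(loc=100.0, scale=2.0, size=20),
                          [91.0, 110.0]])

def summarize(x):
    """Return mean, standard deviation, and standard deviation of the mean."""
    s = x.std(ddof=1)
    return x.mean(), s, s / np.sqrt(len(x))

# Statistics of the full data set.
m0, s0, sm0 = summarize(results)

# The questioned practice: discard values lying more than
# 2 standard deviations from the mean, then recompute.
z = np.abs(results - m0) / s0
trimmed = results[z <= 2.0]
m1, s1, sm1 = summarize(trimmed)

print(f"all {len(results)} values: mean={m0:.2f}  s={s0:.2f}  s_mean={sm0:.2f}")
print(f"trimmed to {len(trimmed)}: mean={m1:.2f}  s={s1:.2f}  s_mean={sm1:.2f}")
```

With data like these, the trimmed set typically shows a nearly unchanged mean but a visibly smaller standard deviation (and standard deviation of the mean), which is precisely the “self-fulfilling” tightening around the average that the text questions.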
