Abstract

Error in chemical analysis is propagated mainly by multiplication (not addition) of random, systematic and spurious errors. Individual random errors tend to have symmetrical frequency distributions but their combined error distribution has a positive skew. Certain systematic errors (bias) conceivably could have frequency distributions which would enhance or lessen the overall skew but they are unlikely to produce a truly normal distribution. Each analytical method, or modification of it, may produce a unique frequency distribution of results. Hence an ideal general statistical treatment of results cannot exist and the best practical compromise should be utilised. Three simple statistical treatments of data produced from various analytical models were compared, to identify the best compromise. Conventional statistics, with no transformation of data, generally treated low results too favourably and high results too harshly. Prior transformation of results to logarithms tended to do the reverse. Transformation of results to factors, followed by derivation of a robust standard deviation, treated the extremes more equally, if somewhat harshly. Factor statistics for precision have low sensitivity to outliers and the assigned true value and they offer a good compromise for the description of analytical data.
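The abstract's central claim, that multiplying several symmetrically distributed relative errors yields a positively skewed overall distribution while adding them does not, can be illustrated with a small simulation. This is a hedged sketch, not code from the paper: the step count, relative standard deviation, and the moment-based skewness estimator are all illustrative choices, and the paper's "factor statistics" method is not reproduced here.

```python
import random
import statistics

random.seed(1)

def simulate(n_results=10000, n_steps=5, rel_sd=0.05):
    """Combine several symmetric relative errors per analytical result,
    once additively and once multiplicatively, for comparison."""
    additive, multiplicative = [], []
    for _ in range(n_results):
        errs = [random.gauss(0.0, rel_sd) for _ in range(n_steps)]
        # Additive propagation: symmetric errors sum, result stays symmetric.
        additive.append(1.0 + sum(errs))
        # Multiplicative propagation: each step scales the running result.
        prod = 1.0
        for e in errs:
            prod *= (1.0 + e)
        multiplicative.append(prod)
    return additive, multiplicative

def skewness(xs):
    """Standardised third central moment (population form)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

add, mult = simulate()
print(f"additive skew:       {skewness(add):+.3f}")
print(f"multiplicative skew: {skewness(mult):+.3f}")
```

The additive sample's skewness hovers near zero, while the multiplicative sample's is clearly positive, consistent with the abstract's observation that the combined distribution acquires a positive skew even though each individual error is symmetric.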


