Abstract

Background: Sigma measures the number of SDs (z-value) from the existing sample mean to the nearest analytical performance standard (ASP) or allowable total error (TEa) limit. Authors and software programs often calculate a sigma metric for each QC sample but use an average sigma to compare methods and select QC strategies. That practice can lead to dramatic over- or under-estimation of the number of errors reported and to the selection of inappropriate QC strategies.

Methods: Data samples were created to produce sigma values of 3.0, 4.5, and 6.0. The Microsoft Excel function NORMSDIST was used to convert sigma to a percent error rate and then to the number of errors per million patients; NORMSINV was used to convert the number of errors per million patients back to sigma.

Results:
A. Six sigma represents a method with a failure rate of 0.0000001 percent, or 0.001 failures of ASP/TEa per million patients.
B. Three sigma represents a method with a failure rate of 0.135 percent, or 1,350 failures of ASP/TEa per million patients.
C. While the average sigma value of samples A and B is 4.5, the average error rate is 0.0675 percent, or 675 failures of ASP/TEa per million patients.
D. An error rate of 0.0675 percent converts, via the NORMSINV function, to a sigma of 3.21.
E. A true 4.5 sigma method would have a failure rate of 0.00034 percent, or 3.4 failures of ASP/TEa per million patients.

Conclusions: Sigma studies that present an average sigma value underestimate the true number of errors. It would be more scientifically correct either to report the number of errors directly or to report the sigma value derived from the average error rate. Laboratory professionals should interpret sigma studies and publications cautiously when a single sigma is used to represent two or more data sets.
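The conversions described in the Methods can be reproduced without Excel. The sketch below, a minimal illustration using Python's standard-library `NormalDist` (the function names `sigma_to_dpm` and `dpm_to_sigma` are my own, not from the study), performs the same one-tailed NORMSDIST/NORMSINV conversions and shows why averaging the sigma values of a 3.0s and a 6.0s data set misrepresents the combined error rate:

```python
from statistics import NormalDist

ND = NormalDist()  # standard normal distribution

def sigma_to_dpm(sigma: float) -> float:
    """One-tailed failure rate, in defects per million, for a sigma (z) value.
    Equivalent to (1 - NORMSDIST(sigma)) * 1,000,000 in Excel."""
    return (1 - ND.cdf(sigma)) * 1_000_000

def dpm_to_sigma(dpm: float) -> float:
    """Invert: defects per million back to a sigma (z) value.
    Equivalent to NORMSINV(1 - dpm / 1,000,000) in Excel."""
    return ND.inv_cdf(1 - dpm / 1_000_000)

dpm_6 = sigma_to_dpm(6.0)  # ~0.001 failures per million (sample A)
dpm_3 = sigma_to_dpm(3.0)  # ~1,350 failures per million (sample B)

# Averaging the sigmas gives 4.5, but averaging the error rates
# gives ~675 per million, which converts back to a sigma of ~3.21,
# far below a true 4.5 sigma method (~3.4 failures per million).
avg_dpm = (dpm_3 + dpm_6) / 2
sigma_from_avg_rate = dpm_to_sigma(avg_dpm)
```

Converting error rates rather than averaging sigma values directly is the point of the abstract's result D: the combined performance of the two data sets behaves like a ~3.2 sigma method, not a 4.5 sigma one.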