Abstract

No. Most goodness-of-fit (GOF) tests attempt to discern a preferred weighting using either absolute or relative errors in the back-calculated calibration x values. However, the former are predisposed to select constant weighting and the latter 1/x² or 1/y² weighting, no matter what the true weighting should be. Here, I use Monte Carlo simulations to quantify the flaws in GOF tests and show why they falsely prefer the weighting toward which they are predisposed. The weighting problem is properly solved through variance function (VF) estimation from replicate data, conveniently separating it from the problem of selecting a response function (RF). Any weighting other than inverse-variance must give loss of precision in the RF parameters and in the estimates of unknowns x₀. In particular, the widely used 1/x² weighting, if wrong, not only sacrifices precision but, even worse, appears to give better precision at small x, leading to falsely optimistic estimates of detection and quantification limits. Realistic VFs typically become constant in the low-x, low-y limit. Thus, even when 1/x² weighting is correct at large signal, neglect of the constant variance component at small signal again gives too-small detection and quantification limits. VF estimation has been disparaged as too demanding of data. Why this is not true is demonstrated with Monte Carlo simulations that show only a few percent increase in calibration parameter uncertainties when the VF is estimated from just three replicates at each of six calibration x values. This point is further demonstrated using examples from the recent literature.
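
As a rough illustration of the comparison described above, the following Monte Carlo sketch (not the author's code; the straight-line response function, the two-component variance function, the parameter values, and the crude VF fit to replicate standard deviations are all illustrative assumptions) estimates a VF from three replicates at each of six calibration x values and compares the scatter of the weighted-least-squares slope under true inverse-variance weights, replicate-estimated VF weights, and assumed 1/x² weights.

    import numpy as np

    rng = np.random.default_rng(1)

    # Six calibration x values, three replicates each (the design discussed above).
    x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    n_rep = 3
    a_true, b_true = 0.5, 1.0                      # hypothetical straight-line RF
    # Hypothetical "realistic" VF: constant at small signal, proportional at large signal.
    sigma = np.sqrt(0.05**2 + (0.03 * (a_true + b_true * x))**2)

    def wls_slope(x, y, w):
        # Weighted least-squares slope for y = a + b*x.
        W, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
        Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
        return (W * Sxy - Sx * Sy) / (W * Sxx - Sx**2)

    slopes = {"true 1/var": [], "estimated VF": [], "1/x^2": []}
    for _ in range(20000):
        y = a_true + b_true * x + sigma * rng.standard_normal((n_rep, x.size))
        y_bar, s = y.mean(axis=0), y.std(axis=0, ddof=1)
        # Crude VF estimate: fit sd ~ c0 + c1*y_bar to the replicate sds; the floor
        # guards against a poorly determined fit from only three replicates per level.
        c1, c0 = np.polyfit(y_bar, s, 1)
        s_fit = np.maximum(c0 + c1 * y_bar, 0.1 * s.mean())
        for key, w in (("true 1/var", n_rep / sigma**2),
                       ("estimated VF", n_rep / s_fit**2),
                       ("1/x^2", 1.0 / x**2)):
            slopes[key].append(wls_slope(x, y_bar, w))

    for key, b in slopes.items():
        print(f"{key:>12s}: sd of fitted slope = {np.std(b):.5f}")

The printed standard deviations allow the precision penalty from weighting with a replicate-estimated VF to be compared against the penalty from an incorrectly assumed 1/x² weighting, which is the comparison the abstract summarizes.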
