Abstract

Numerous empirical studies show that the accuracy of international normalized ratio (INR) measurements is unsatisfactory and worse than generally expected. We demonstrate that a plausible reason for this large inaccuracy is the conventional calibration procedure for reference preparations, with (i) the erroneous assumption that the line relating logarithmic prothrombin times (log PTs) of patients passes through the mean log PT of the 'normal' population (mean normal PT); (ii) unrecognized interactions between patients and PT systems; and (iii) systematic exclusion of 'outliers'. The same conventional procedure also results in serious overestimation of the accuracy of INR measurements, thus leading to a false sense of security in oral anticoagulant therapy. In an example with data from WHO guidelines, we show that the systematic overprediction of INR (which is believed to be 0) may be as large as 5% when prediction is performed under the conventional WHO model. Under the same model, the CV of the predicted vs. the true INR is believed to be only about 1% when in reality it is more than 4%. We suggest that the conventional calibration procedure be modified in order to reduce the twofold negative impact of lower true accuracy and overestimated reported accuracy on oral anticoagulant therapy, and to allow for an unambiguous definition of true INR values.
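To make the scale of the reported bias concrete, the following minimal sketch applies the standard WHO model, INR = (PT / MNPT)^ISI, and shows how a small error in the calibrated ISI propagates into a systematic INR overprediction of the magnitude discussed above. The specific numbers (MNPT of 12 s, a 5% ISI error, a PT of 30 s) are hypothetical illustrations, not values from the paper.

```python
# Standard WHO model: INR = (PT / MNPT) ** ISI, where MNPT is the
# mean normal prothrombin time and ISI the International Sensitivity Index.
def inr(pt_seconds: float, mnpt_seconds: float, isi: float) -> float:
    """Compute INR from a patient PT, the mean normal PT, and the ISI."""
    return (pt_seconds / mnpt_seconds) ** isi

# Hypothetical illustration: the same sample evaluated with the true
# calibration vs. a calibration whose ISI is off by 5%.
true_mnpt, true_isi = 12.0, 1.00   # assumed "true" calibration
est_mnpt, est_isi = 12.0, 1.05     # hypothetical miscalibrated ISI

pt = 30.0                          # a PT in the therapeutic range
true_inr = inr(pt, true_mnpt, true_isi)   # (30/12)^1.0 = 2.5
pred_inr = inr(pt, est_mnpt, est_isi)

bias_pct = 100.0 * (pred_inr / true_inr - 1.0)
print(f"true INR {true_inr:.2f}, predicted {pred_inr:.2f}, bias {bias_pct:.1f}%")
```

Because the model is a power law, a fixed relative error in the ISI produces a bias that grows with the patient's INR, so the overprediction is worst precisely in the therapeutic range where accuracy matters most.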
