The reciprocal relation between the coagulation time t and the concentration S of coagulation factors (in one-stage assays, the substrates of the reactions involved) can be written as t = t_min + e/S. Here t_min is the minimum possible coagulation time and e the sensitivity of the method to a change in the concentration of substrate. As the variance of the coagulation time increases strongly with increasing dilution of the coagulation factors, this equation should be multiplied by S to harmonize the variance in calibration: t·S = e + t_min·S. The plot of t·S against S (the parametrizing straight line) now gives statistically controllable numerical values for e and t_min. Deviations at low concentrations that are associated with the diluent used, caused by lack of fibrinogen, a change of pH, glass activation, or the residual content of adsorbed plasma, can be easily recognized and in most cases compensated for. The minimum number of individual plasmas required for calibration can be stated, as well as the optimum spacing of the plasma concentrations. With e and t_min known, a nomogram can be constructed relating coagulation time to the content of coagulation factors in patients' plasmas. The content can also be determined by recording a parametrizing straight line on the patient's plasma, a variant to be preferred for contents near normal and when inhibitors are to be expected. A comparison of several commercial thromboplastin time reagents with respect to their sensitivity was made, and the standardization of the British Comparative Thromboplastin was re-examined. Single-factor sensitivities can be computed, and the usability of coagulometers can be assessed. As the proposed method controls procedures, reagents, and coagulometers, an international standardization of coagulation-time measurement on this basis is considered possible.
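The calibration step amounts to a simple linear fit of t·S against S, from which e (intercept) and t_min (slope) follow, and the relation t = t_min + e/S can then be inverted to estimate the factor content from a measured coagulation time. The following is a minimal sketch of that procedure; the function names (fit_calibration, content_from_time) and the numerical calibration values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_calibration(S, t):
    """Fit t*S = e + t_min*S by least squares.

    S : relative factor concentrations of the calibration dilutions
    t : measured coagulation times at those dilutions
    Returns (e, t_min): intercept e (sensitivity of the method) and
    slope t_min (minimum possible coagulation time).
    """
    S = np.asarray(S, dtype=float)
    t = np.asarray(t, dtype=float)
    # Multiplying by S harmonizes the variance, which otherwise grows
    # strongly with increasing dilution.
    y = t * S
    t_min, e = np.polyfit(S, y, 1)  # slope = t_min, intercept = e
    return e, t_min

def content_from_time(t_patient, e, t_min):
    """Invert t = t_min + e/S to estimate the factor content S."""
    return e / (t_patient - t_min)

# Hypothetical calibration data (illustrative values only).
S_cal = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
t_cal = np.array([12.0, 14.5, 19.8, 30.1, 50.7])

e, t_min = fit_calibration(S_cal, t_cal)
print(f"e = {e:.2f} s, t_min = {t_min:.2f} s")
print(f"Estimated content at t = 25 s: {content_from_time(25.0, e, t_min):.2f}")
```

In this transformed form the fit is an ordinary least-squares regression, which is what makes e and t_min statistically controllable; the same inversion underlies the nomogram described in the abstract.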