Abstract

Blood-based indicators used to assess iron status are assumed to be accurate. In practice, these measurements are subject to inaccuracies that stem from bias and variability. For example, the analytic variability of serum ferritin measurements across laboratories is very high (>15%), which increases the rate of misclassification in clinical and epidemiologic studies. The procedures used in laboratory medicine to minimize bias and variability could be applied effectively in clinical research studies, particularly in the evaluation of iron deficiency and its associated anemia in pregnancy and early childhood and in characterizing states of iron repletion and excess. The harmonization and standardization of traditional and novel bioindicators of iron status will allow results from clinical studies to be translated more meaningfully into clinical practice by providing a firm foundation for clinical laboratories to set appropriate cutoffs. In addition, proficiency testing monitors the performance of the methods over time. It is important that measures of iron status be evaluated, validated, and performed in a manner that is consistent with standard procedures in laboratory medicine.
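The link between analytic variability and misclassification can be illustrated with a simple simulation. The sketch below is purely hypothetical: the sample values, the 15 µg/L deficiency cutoff, and the comparison CVs are illustrative assumptions, not data from this article. It shows how a higher analytic coefficient of variation (CV) more often flips measurements across a diagnostic cutoff.

```python
import random

def misclassification_rate(true_values, cutoff, cv, n_trials=1000, seed=0):
    """Estimate how often analytic variability (expressed as a CV)
    flips a measurement across a diagnostic cutoff.

    Illustrative sketch only: assumes Gaussian, proportional
    analytic error around each true value.
    """
    rng = random.Random(seed)
    flips = 0
    total = 0
    for true in true_values:
        truly_deficient = true < cutoff
        for _ in range(n_trials):
            # Measured value = true value + proportional analytic error
            measured = rng.gauss(true, cv * true)
            if (measured < cutoff) != truly_deficient:
                flips += 1
            total += 1
    return flips / total

# Hypothetical serum ferritin values (µg/L) near an assumed
# deficiency cutoff of 15 µg/L
samples = [10, 13, 14, 16, 18, 25, 40]
rate_high_cv = misclassification_rate(samples, cutoff=15, cv=0.15)
rate_low_cv = misclassification_rate(samples, cutoff=15, cv=0.05)
print(f"CV 15%: {rate_high_cv:.1%} of measurements misclassified")
print(f"CV  5%: {rate_low_cv:.1%} of measurements misclassified")
```

Most of the misclassification comes from values close to the cutoff, which is why tight control of analytic variability matters most for subjects near diagnostic thresholds.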
