Abstract

Purpose: Liver biopsy was long considered the reference standard for measuring liver iron concentration. However, its high sampling variability and invasive nature make it poorly suited for serial analyses. To demonstrate the fallibility of liver biopsy, we use serial estimates of iron chelation efficiency (ICE) calculated from R2 and R2* MRI liver iron concentration (LIC) estimates, as well as from simulated liver biopsy (over all physically reasonable sampling variability), to compare the robustness of these three techniques.

Materials and Methods: R2, R2*, transfusional volume, and chelator compliance were obtained from 49 participants in a phase II clinical trial of deferitazole over two years. Liver biopsy LIC results were simulated using sampling errors of 0%, 10%, 20%, 30%, and 40% and an iron assay variability of 12%. LIC estimates by R2, R2*, and simulated biopsy were used to calculate ICE over time. Bland–Altman limits of agreement were compared across observation intervals of 12, 24, and 48 weeks.

Results: At 48-week intervals, LIC estimates by R2, R2*, and "perfect" liver biopsy had comparable accuracy in predicting ICE; both MRI methods were superior to any physically realizable liver biopsy (sampling error of 10% or higher). LIC by R2* demonstrated the most robust ICE estimates at monitoring intervals of 24 and 12 weeks, but this difference did not remain significant at 48-week intervals.

Conclusion: MRI relaxometry is superior to liver biopsy for serial LIC observations, such as those used in the care of transfusional siderosis patients, and should also be considered the new standard of LIC determination for regulatory purposes. Among relaxometry techniques, LIC estimates by R2* are more robust for tracking changes in iron balance over intermediate time scales (≤24 weeks).
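The biopsy simulation described in the Methods can be illustrated with a minimal Monte Carlo sketch. This assumes a simple multiplicative error model in which sampling error and iron assay variability act as independent, normally distributed relative errors around the true LIC; the abstract does not specify the authors' exact error model, so the function name and distributional choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_biopsy_lic(true_lic, sampling_cv, assay_cv=0.12, n=10_000, rng=rng):
    """Simulate biopsy LIC estimates (illustrative model only).

    true_lic    -- true liver iron concentration (e.g. mg Fe/g dry weight)
    sampling_cv -- relative sampling error (0.0 to 0.4 in the study)
    assay_cv    -- relative iron assay variability (12% in the study)
    """
    # Each simulated biopsy perturbs the true LIC by two independent
    # multiplicative error terms: tissue sampling and the iron assay.
    sampling_factor = rng.normal(1.0, sampling_cv, n)
    assay_factor = rng.normal(1.0, assay_cv, n)
    return true_lic * sampling_factor * assay_factor

# Example: true LIC of 10 mg Fe/g dry weight with 20% sampling error.
estimates = simulate_biopsy_lic(10.0, 0.20)
print(f"mean = {estimates.mean():.2f}, CV = {estimates.std() / estimates.mean():.2f}")
```

Under this model the combined coefficient of variation is roughly the quadrature sum of the two error terms, which is why even a 10% sampling error appreciably widens the spread of biopsy LIC estimates relative to a "perfect" (0% sampling error) biopsy.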
