Abstract

Purpose: To develop an easy-to-use, reproducible QA procedure based on open-source code to automatically evaluate the stability of different metrics extracted from CT images: Hounsfield Unit (HU) calibration, edge characterization metrics (contrast and drop range), and radiomic features.

Methods: The QA protocol was based on electron density phantom imaging. In-house open-source Python code was developed for the automatic computation of the metrics and their reproducibility analysis. The impact on reproducibility was evaluated across different radiation therapy protocols, phantom positions within the field of view, and CT systems, in terms of variability (Shapiro-Wilk test on 15 repeated measurements carried out over three days) and comparability (Bland-Altman analysis and the Wilcoxon rank-sum test or Kendall rank correlation coefficient).

Results: Regarding intrinsic variability, most metrics followed a normal distribution (88% of HU values, 63% of edge parameters, and 82% of radiomic features). Regarding comparability, HU and contrast were comparable under all conditions, whereas drop range was comparable only for the same CT scanner and phantom position. The percentages of radiomic features comparable independently of protocol, position, and system were 59%, 78%, and 54%, respectively. The non-significant differences (7%) between the HU calibration curves obtained at two different institutions translated into comparable Gamma Index results (1 mm, 1%, >99%).

Conclusions: Automated software to assess the reproducibility of different CT metrics was successfully created and validated, and a QA routine is proposed.
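The paper's code itself is not reproduced here, but the two statistical checks named in the Methods can be sketched with SciPy: a Shapiro-Wilk test on the 15 repeated measurements to assess variability (normality), and a Wilcoxon signed-rank test to assess comparability between two acquisition conditions. The function names, the 0.05 significance level, and the example data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

ALPHA = 0.05  # assumed significance level, not stated in the abstract


def variability_normal(measurements, alpha=ALPHA):
    """Shapiro-Wilk test: True if the repeated measurements of one
    metric are compatible with a normal distribution."""
    _, p = stats.shapiro(measurements)
    return bool(p > alpha)


def comparable(metric_a, metric_b, alpha=ALPHA):
    """Wilcoxon signed-rank test on paired measurements of the same
    metric under two conditions (e.g. two phantom positions):
    True if no significant difference is detected."""
    _, p = stats.wilcoxon(metric_a, metric_b)
    return bool(p > alpha)


# Illustrative data: 15 repeated HU measurements of one phantom insert
# under two conditions (values are made up for the sketch).
rng = np.random.default_rng(0)
hu_cond_a = 1000 + rng.normal(0, 2, size=15)
hu_cond_b = 1000 + rng.normal(0, 2, size=15)

print("normal:", variability_normal(hu_cond_a))
print("comparable:", comparable(hu_cond_a, hu_cond_b))
```

A full reproduction of the paper's pipeline would add the Bland-Altman analysis and the Kendall rank correlation coefficient (`scipy.stats.kendalltau`) mentioned in the Methods, applied per metric across protocols, positions, and systems.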

