Abstract
Multiple commercial, open-source, and academic software tools exist for objective quantification of lung density in computed tomography (CT) images. The purpose of this study was to evaluate the intersoftware reproducibility of CT lung density measurements. CT images from 50 participants in the COPDGene™ cohort study were randomly selected for analysis (n=10 participants in each Global Initiative for Chronic Obstructive Lung Disease [GOLD] grade, GOLD 0–IV). Academic groups (n=4) and commercial vendors (n=4) participated anonymously, generating CT lung density measurements with their software tools. CT total lung volume (TLV), the percentage of low-attenuation areas of the lung with Hounsfield unit (HU) values below −950 HU (LAA950), and the HU value corresponding to the 15th percentile of the parenchymal density histogram (Perc15) were included in the analysis. Intersoftware bias and the reproducibility coefficient (RDC) were generated with and without quality assurance (QA), i.e., manual correction of the lung segmentation; intrasoftware bias and RDC were also generated by repeated measurements on the same images. Intersoftware mean bias was within ±0.22 mL, ±0.46%, and ±0.97 HU for TLV, LAA950, and Perc15, respectively. The RDC was 0.35 L, 1.2%, and 1.8 HU for TLV, LAA950, and Perc15, respectively. Intersoftware RDC remained unchanged following QA: 0.35 L, 1.2%, and 1.8 HU for TLV, LAA950, and Perc15, respectively. All software investigated had an intrasoftware RDC of 0. The RDC was comparable for academic and commercial software tools: 0.39 L/0.32 L, 1.2%/1.2%, and 1.7 HU/1.6 HU for TLV, LAA950, and Perc15, respectively. Multivariable regression analysis showed that academic software tools had greater within-subject standard deviation of TLV than commercial vendors, but no significant differences between academic and commercial groups were found for LAA950 or Perc15 measurements.
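The two density metrics named above have conventional definitions that can be illustrated directly. The sketch below is not taken from any of the study's software tools; it assumes only the standard definitions stated in the abstract: LAA950 is the percentage of lung voxels below −950 HU, and Perc15 is the 15th-percentile HU value of the parenchymal density histogram.

```python
import numpy as np

def lung_density_metrics(hu_values):
    """Compute LAA950 (%) and Perc15 (HU) from a 1-D array of lung-voxel HU values.

    Illustrative sketch of the conventional metric definitions, not the
    implementation used by any software tool evaluated in this study.
    """
    hu = np.asarray(hu_values, dtype=float)
    laa950 = 100.0 * np.mean(hu < -950.0)  # % of voxels below -950 HU
    perc15 = np.percentile(hu, 15)         # 15th percentile of the HU histogram
    return laa950, perc15

# Synthetic example (illustrative HU values only): mostly "normal" parenchyma
# near -860 HU with an emphysema-like tail near -970 HU.
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-860, 30, 8000), rng.normal(-970, 10, 2000)])
laa950, perc15 = lung_density_metrics(hu)
```

In practice these metrics are computed over the voxels inside a lung segmentation mask, which is why the study tested whether manual QA of that segmentation changed the results.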
Computed tomography total lung volume and lung density measurement bias and reproducibility were reported across eight different software tools. Bias was negligible across vendors, reproducibility was comparable between software tools from academic groups and commercial vendors, and segmentation QA had negligible impact on measurement variability between software tools. In summary, this study quantifies the additional measurement variability that should be accounted for when different software tools are used to measure lung density longitudinally under well-standardized image acquisition protocols. However, because intrasoftware reproducibility was deterministic in all cases, using the same software tool for serial studies is highly recommended to minimize variability.
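The reproducibility coefficient reported throughout can be sketched as follows. This is a hedged illustration assuming the common metrology convention (e.g., QIBA guidance) that RDC = 2.77 × the within-subject standard deviation; the study's exact estimator may differ.

```python
import numpy as np

def reproducibility_coefficient(measurements):
    """RDC from a 2-D array: rows = subjects, columns = repeated measurements.

    Assumes the conventional definition RDC = 2.77 * within-subject SD,
    where the within-subject variance is pooled across subjects.
    """
    m = np.asarray(measurements, dtype=float)
    within_var = np.mean(np.var(m, axis=1, ddof=1))  # pooled within-subject variance
    return 2.77 * np.sqrt(within_var)
```

Under this definition, repeated measurements that are identical for every subject give an RDC of exactly 0, which matches the deterministic intrasoftware behavior reported above.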