Abstract

Body composition assessment using computed tomography (CT) images at the L3 level is increasingly applied in cancer research. Robust, high-throughput automated segmentation is key to assessing large patient cohorts and to supporting the implementation of body composition analysis in routine clinical practice. We trained and externally validated a deep learning neural network (DLNN) to automatically segment L3 CT images. Expert-drawn segmentations of visceral and subcutaneous adipose tissue (VAT/SAT) and skeletal muscle (SM) on L3 CT images of 3,187 patients undergoing abdominal surgery were used to train the DLNN. The external validation cohort comprised 2,535 patients with abdominal cancer. DLNN performance was evaluated with the (geometric) Dice Similarity (DS) coefficient and Lin's Concordance Correlation Coefficient (CCC). Concordance between automatic and manual segmentations was strong, with median DS for SM, VAT, and SAT of 0.97 (interquartile range, IQR: 0.95-0.98), 0.98 (IQR: 0.95-0.98), and 0.95 (IQR: 0.92-0.97), respectively. Concordance correlations were excellent: SM 0.964 (0.959-0.968), VAT 0.998 (0.998-0.998), and SAT 0.992 (0.991-0.993). Bland-Altman metrics indicated only small and clinically insignificant systematic offsets: SM radiodensity 0.23 Hounsfield units (0.5%), SM 1.26 cm²·m⁻² (2.8%), VAT -1.02 cm²·m⁻² (1.7%), and SAT 3.24 cm²·m⁻² (4.6%). A robustly performing DLNN for automated body composition analysis was developed and externally validated on an independent cohort. CT-based body composition analysis is highly prognostic for long-term overall survival in oncology. This DLNN was successfully trained and externally validated on several large patient cohorts and will therefore enable large-scale population studies and the implementation of body composition analysis into clinical practice.
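
For reference, the three agreement metrics reported above (Dice Similarity, Lin's Concordance Correlation Coefficient, and Bland-Altman bias with limits of agreement) can be computed as in the minimal Python sketch below. This is an illustrative implementation of the standard formulas, not the authors' code; the function names and toy values are hypothetical.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity coefficient between two binary segmentation masks:
    DS = 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's Concordance Correlation Coefficient for paired measurements,
    e.g. automatic vs. manual tissue areas across patients:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

def bland_altman(x: np.ndarray, y: np.ndarray):
    """Bland-Altman systematic offset (mean difference) and 95% limits
    of agreement between two measurement methods."""
    diff = x - y
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Toy example: automatic vs. manual skeletal-muscle index (cm^2/m^2)
# for five hypothetical patients.
auto = np.array([45.1, 52.3, 38.7, 61.0, 47.9])
manual = np.array([44.0, 51.8, 39.5, 59.6, 47.2])
print(f"CCC = {lins_ccc(auto, manual):.3f}")
print("Bland-Altman bias, 95% LoA:", bland_altman(auto, manual))
```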
