Abstract

Quantitative muscle and fat measurements obtained through body composition analysis are expected to provide new, stable biomarkers for the early and accurate prediction of treatment-related toxicity, treatment response, and prognosis in patients with lung cancer. Such biomarkers would enable individualized treatment regimens to be adjusted in a timely manner, which is critical to further improving patient prognosis and quality of life. We aimed to develop an attention-based deep learning model for fully automated segmentation of abdominal computed tomography (CT) images to quantify body composition. The model was designed around an attention mechanism with U-Net as its framework. Subcutaneous fat, skeletal muscle, and visceral fat were manually segmented by two experts to serve as ground-truth labels. Model performance was evaluated using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). The mean DSCs for subcutaneous fat and skeletal muscle were high on both the enhanced CT test set (0.93±0.06 and 0.96±0.02, respectively) and the plain CT test set (0.90±0.09 and 0.95±0.01, respectively). However, segmentation of visceral fat was less accurate, particularly on the enhanced CT test set, where the mean DSC was 0.87±0.11 versus 0.92±0.03 on the plain CT test set. We discuss the reasons for this result. This work demonstrates a method for automatically outlining subcutaneous fat, skeletal muscle, and visceral fat at the level of the third lumbar vertebra (L3).
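The DSC and HD95 used for evaluation are standard segmentation metrics. As a minimal illustrative sketch (not the authors' implementation), assuming segmentations are available as binary NumPy masks and that boundary pixel coordinates have already been extracted, the two metrics can be computed as follows:

import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def hd95(pred_points: np.ndarray, truth_points: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between two boundary point sets.

    Each argument is an (N, 2) array of boundary pixel coordinates
    (obtainable, e.g., by applying np.argwhere to a boundary mask).
    """
    d = cdist(pred_points, truth_points)  # pairwise Euclidean distances
    # Directed distance: nearest-neighbour distance from each point of
    # one contour to the other contour, then take the 95th percentile
    # of each direction and keep the larger of the two.
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

A DSC of 1 indicates perfect overlap and 0 indicates none; HD95 is in pixel units here and would be multiplied by the CT in-plane voxel spacing to report millimetres.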
