Abstract

Background: Skeletal muscle segmentation is an important procedure for assessing sarcopenia, an emerging imaging biomarker of patient frailty. Data annotation remains the bottleneck for training deep learning auto‐segmentation models.

Purpose: There is a need to define methodologies for applying models to different domains (e.g., anatomical regions or imaging modalities) without dramatically increasing data annotation.

Methods: To address this problem, we empirically evaluate the generalizability of various source tasks for transfer learning: natural image classification, natural image segmentation, unsupervised image reconstruction, and self‐supervised jigsaw solving. Axial CT slices at L3 were extracted from PET‐CT scans for 204 oesophago‐gastric cancer patients, and the skeletal muscle was manually delineated by an expert. Features were transferred and segmentation models trained on subsets (n = 5, 10, 25, 50, 75, 100, 125) of the manually annotated training set. Four‐fold cross‐validation was performed to evaluate model generalizability. Human‐level performance was established by an inter‐observer study consisting of ten trained radiographers. The Dice similarity coefficient and root mean square distance‐to‐agreement were calculated for each prediction and used to assess model performance.

Results: We find that accurate segmentation models can be trained on a fraction of the data required by current approaches. Models pre‐trained on a segmentation task and fine‐tuned on 10 images produce delineations that are comparable to those from trained observers and extract reliable measures of muscle health.

Conclusions: Appropriate transfer learning can generate convolutional neural networks for abdominal muscle segmentation that achieve human‐level performance while decreasing the required data by an order of magnitude compared to previous methods (n = 160 → 10).
This work enables the development of future models for assessing skeletal muscle at other anatomical sites where large annotated data sets are scarce and clinical needs are yet to be addressed.
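The Dice similarity coefficient mentioned in the Methods is a standard overlap metric for comparing a predicted segmentation mask against a reference delineation. As a minimal illustration (not the authors' code), it can be computed for binary masks as:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for a predicted and a reference muscle contour.
prediction = np.array([[1, 1, 0, 0],
                       [1, 1, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])
reference = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
print(dice_coefficient(prediction, reference))  # 2*3/(4+3) ≈ 0.857
```

A Dice score of 1.0 indicates identical masks and 0.0 indicates no overlap; the root mean square distance‐to‐agreement, the paper's second metric, instead summarizes the distances between the two contour surfaces.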
