A New Approach for Automated Image Segmentation of Organs at Risk in Cervical Cancer
Meredith Jones, Lacey R. McNally
Published Online: Mar 27, 2020. https://doi.org/10.1148/rycan.2020204010

Take-Away Points
■ Major Focus: Development of an automated multiclass image segmentation model to identify organs at risk (OARs) on CT scans in patients with locally advanced cervical cancer prior to radiation therapy.
■ Key Results: The convolutional neural network model outperformed model-based, intensity-based, and atlas-based segmentation algorithms, performed comparably to other deep learning segmentation models, and decreased the time needed for image segmentation by several thousand-fold.
■ Impact: Reducing the intra- and interuser variability introduced by manual OAR delineation can lead to more accurate and efficient radiation therapy planning.

In a recent article, Liu et al developed an automated multiclass image segmentation model to identify organs at risk (OARs) on CT scans. To plan effective and safe radiation therapy for cervical cancer, a CT scan is typically acquired before the first treatment. Currently, a radiation oncologist segments the CT images by hand to identify OARs, a time-consuming process that is subject to intra- and interobserver variability. To overcome these limitations and biases, automated image segmentation techniques can be used to improve both the accuracy and the speed of annotation. A major obstacle in creating deep learning segmentation models for medical images is the lack of available datasets for model training and validation. Liu et al therefore developed a two-dimensional image segmentation model based on the established convolutional neural network architecture U-Net, chosen because it is feasible to train from scratch with a small training set.

This study examined the CT scans of 105 patients with locally advanced cervical cancer. OAR segmentation for cervical cancer is a particularly challenging problem because of variability in organ size and frequently unclear organ boundaries; even an experienced radiologist can have trouble delineating them. Seven organs were identified as OARs for cervical cancer radiation therapy: bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord.

The model uses context aggregation blocks where U-Net has plain convolution layers. These blocks combine dilated convolution layers with different dilation rates and standard convolution layers with different kernel sizes, yielding accurate feature extraction over a large receptive field with a low false-positive rate. A squeeze-and-excitation block was used to ensure that small organs receive proper consideration, and a class-weighted cross-entropy loss function was used to overcome the dataset imbalance created by the differences in size and shape of each OAR.

Ground truth was available from the radiation oncologist's annotations, allowing the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD) to be calculated and used as evaluation metrics. Minimal sketches of the architectural components, the weighted loss, and the evaluation metrics follow below.
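To make the block design concrete, the following is a minimal PyTorch sketch of a context-aggregation-style block gated by a squeeze-and-excitation module. The branch count, dilation rates (1, 2, 4), kernel sizes (3, 5), and reduction ratio are illustrative assumptions, not the exact configuration reported by Liu et al.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention: globally pool, then re-weight channels.
    Lets informative channels be amplified, which helps small organs."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pool
        return x * w.view(b, c, 1, 1)      # excite: per-channel gating

class ContextAggregationBlock(nn.Module):
    """Parallel dilated and plain convolutions fused by a 1x1 conv,
    enlarging the receptive field without reducing resolution."""
    def __init__(self, in_ch: int, out_ch: int,
                 dilations=(1, 2, 4), kernel_sizes=(3, 5)):
        super().__init__()
        branches = []
        for d in dilations:                # dilated 3x3 branches
            branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)))
        for k in kernel_sizes:             # plain conv branches
            branches.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)))
        self.branches = nn.ModuleList(branches)
        self.fuse = nn.Conv2d(out_ch * len(branches), out_ch, 1)
        self.se = SqueezeExcite(out_ch)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.se(self.fuse(y))

# Example: one block applied to a batch of single-channel CT slices.
block = ContextAggregationBlock(in_ch=1, out_ch=32)
out = block(torch.randn(2, 1, 256, 256))   # -> (2, 32, 256, 256)
```

The design point is that parallel dilated branches grow the receptive field without pooling, so contextual information accumulates while fine organ boundaries are preserved.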
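The class-weighted loss itself is a standard framework primitive. The sketch below assumes, hypothetically, inverse-frequency weighting over eight classes (background plus the seven OARs); the exact weighting scheme used by Liu et al may differ.

```python
import torch
import torch.nn as nn

# 8 classes: background plus the 7 OARs listed above.
NUM_CLASSES = 8

def make_class_weights(label_volume: torch.Tensor) -> torch.Tensor:
    """Illustrative inverse-frequency weighting: rare classes (e.g., spinal
    cord) get large weights, abundant ones (background) get small weights."""
    counts = torch.bincount(label_volume.flatten(), minlength=NUM_CLASSES).float()
    return counts.sum() / (NUM_CLASSES * counts.clamp(min=1))

labels = torch.randint(0, NUM_CLASSES, (2, 256, 256))   # stand-in ground truth
logits = torch.randn(2, NUM_CLASSES, 256, 256)          # stand-in network output

loss_fn = nn.CrossEntropyLoss(weight=make_class_weights(labels))
loss = loss_fn(logits, labels)   # scalar; backpropagate as usual
```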
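Both evaluation metrics can be computed directly from binary masks. The following NumPy/SciPy sketch extracts surface voxels by erosion; it is a simplified stand-in for the dedicated surface-distance tools typically used in practice, and the test masks are synthetic.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its one-voxel erosion."""
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, truth: np.ndarray) -> float:
    """95th percentile Hausdorff distance between two binary masks,
    via distance transforms evaluated at each mask's surface voxels."""
    sp, st = surface(pred), surface(truth)
    d_ps = distance_transform_edt(~st)[sp]   # pred surface -> truth surface
    d_sp = distance_transform_edt(~sp)[st]   # truth surface -> pred surface
    return max(np.percentile(d_ps, 95), np.percentile(d_sp, 95))

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), bool); truth[22:42, 22:42] = True
print(f"DSC={dice(pred, truth):.3f}, HD95={hd95(pred, truth):.2f} px")
```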
The DSC and HD metrics indicated performance comparable to other deep learning image segmentation models for most of the organs analyzed, with inferior results for the rectum and superior results for the intestines. All results were reviewed by an oncologist, who judged more than 90% of the segmentations produced by this model to be highly acceptable for use in radiation therapy planning. To make the model more robust and generalizable, a larger dataset with scans from different sources will be needed, and extending the model to three dimensions may ultimately improve clinical translation.

Highlighted Article
Liu Z, Liu X, Xiao B, et al. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Physica Medica 2020;69:184-191. https://doi.org/10.1016/j.ejmp.2019.12.008