
nnU-Net: Further Automating Biomedical Image Autosegmentation

Ricky Savjani

Radiology: Imaging Cancer, Vol. 3, No. 1. Published online January 29, 2021. https://doi.org/10.1148/rycan.2021209039

Take-Away Points

■ Major Focus: Artificial intelligence networks generally perform well only on specific datasets and very particular structures, limiting widespread implementation.

■ Key Result: The self-configuring design of the nnU-Net algorithm provides a new benchmark for segmenting organs, tumors, and cells without manual segmentation.

■ Impact: nnU-Net is an out-of-the-box tool that clinics and laboratories can train in just days, offering a path to broader adoption of automated image segmentation in cancer imaging.

Automated image segmentation (autosegmentation) uses computational algorithms to define three-dimensional volumes of anatomic features of interest in imaging studies, including tumors and nearby organs. Clinical translation of autosegmentation promises to improve speed and reduce interobserver variability for planning radiation therapy, identifying tumors, quantifying response to therapy, and constructing downstream applications such as radiomics. While autosegmentation typically performs well with specific training sets or limited types of structures, algorithms commonly fail to generalize to other datasets or tasks, preventing widespread adoption.

To overcome this challenge, Isensee et al developed nnU-Net, an autosegmentation framework that eliminates manual steps, including preprocessing, network architecture engineering, training, and postprocessing. Instead, nnU-Net uses a set of readily accessible rules derived from the underlying data to guide the construction of the neural network and the associated data manipulation. nnU-Net does not create a new network design (hence its clever name: "no new net"). Rather, the true discovery lies in the set of systematic rules that build and train models fully automatically. The authors showcase the power of nnU-Net by running their framework on 23 medical image segmentation challenges spanning a variety of modalities, including CT, MRI, and even electron microscopy. Quite impressively, nnU-Net achieved top-level performance on all challenges without any custom modifications.

The authors released their PyTorch code publicly (https://github.com/MIC-DKFZ/nnUNet) and created a Linux-based command-line tool to run nnU-Net. I personally tested nnU-Net for segmenting the gross tumor volume of oropharyngeal head and neck cancers as defined by the 2020 MICCAI HECKTOR challenge (https://www.aicrowd.com/challenges/miccai-2020-hecktor). I downloaded the code and trained the out-of-the-box nnU-Net model with three-dimensional full-resolution CT and PET images from the HECKTOR challenge data, consisting of 201 patients with oropharyngeal tumors. I trained the model with fivefold cross-validation on five graphics processing units (NVIDIA V100, 16 GB each) in parallel over 3 days.
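For readers who want to attempt a similar run, the sketch below shows one way the five folds could be launched in parallel, one per GPU, using the v1 command-line interface distributed with the public repository. The task identifier is hypothetical, the commands assume the HECKTOR images have already been converted and preprocessed into nnU-Net's expected dataset layout, and exact command names and arguments may differ between nnU-Net versions; treat this as an illustration rather than the authors' exact procedure.

```python
# Illustrative only: launch fivefold cross-validation in parallel, one fold per GPU.
# Assumes the nnU-Net v1 command-line tools (e.g., nnUNet_train) are on PATH and that
# the dataset has already been converted and preprocessed; the task name is hypothetical.
import os
import subprocess

TASK = "Task500_HECKTOR"      # hypothetical task ID for the converted CT/PET dataset
CONFIG = "3d_fullres"         # three-dimensional full-resolution configuration
TRAINER = "nnUNetTrainerV2"   # default trainer class in the v1 release

procs = []
for fold in range(5):                        # folds 0-4, each pinned to its own GPU
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(fold)  # one NVIDIA V100 per fold
    cmd = ["nnUNet_train", CONFIG, TRAINER, TASK, str(fold)]
    procs.append(subprocess.Popen(cmd, env=env))

for p in procs:                              # block until all five folds finish
    p.wait()
```

After training, the same package provides companion commands for running inference on held-out cases and for ensembling predictions across the five folds.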
Without any manual modifications, the algorithm produced a Dice score of 74.7% (a measure of similarity between two samples) on the test set, placing third overall on the postchallenge leaderboard.
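For context, the Dice score is twice the overlap between the predicted and reference segmentations divided by the sum of their sizes, ranging from 0 (no overlap) to 1 (perfect agreement). The snippet below is a minimal, generic illustration on binary masks and is not taken from the nnU-Net codebase.

```python
# Generic Dice coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two 4-voxel "tumors" sharing 2 voxels -> Dice = 2*2 / (4+4) = 0.5
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[2:4, 1:3] = 1
print(dice(a, b))  # 0.5
```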

nnU-Net provides a new approach and benchmark for autosegmentation models across several domains of medical image segmentation. By removing manual steps in data processing and network engineering, nnU-Net paves the way for widespread adoption of automated medical image segmentation.

Highlighted Article

Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2020. Published December 7, 2020. doi:10.1038/s41592-020-01008-z.
