Abstract

Purpose
Segmentation of organs‐at‐risk (OARs) is an essential component of the radiation oncology workflow. Commonly segmented thoracic OARs include the heart, esophagus, spinal cord, and lungs. This study evaluated a convolutional neural network (CNN) for automatic segmentation of these OARs.

Methods
The dataset was created retrospectively from consecutive radiotherapy plans containing all five OARs of interest, comprising 22,411 CT slices from 168 patients. Patients were divided into training, validation, and test datasets according to a 66%/17%/17% split. We trained a modified U‐Net, applying transfer learning from a VGG16 image classification model trained on ImageNet. The Dice coefficient and 95% Hausdorff distance on the test set for each organ were compared to a commercial atlas‐based segmentation model using the Wilcoxon signed‐rank test.

Results
On the test dataset, the median Dice coefficients for the CNN model vs. the multi‐atlas model were 71% vs. 67% for the spinal cord, 96% vs. 94% for the right lung, 96% vs. 94% for the left lung, 91% vs. 85% for the heart, and 63% vs. 37% for the esophagus. The median 95% Hausdorff distances were 9.5 mm vs. 25.3 mm, 5.1 mm vs. 8.1 mm, 4.0 mm vs. 8.0 mm, 9.8 mm vs. 15.8 mm, and 9.2 mm vs. 20.0 mm for the respective organs. All results favored the CNN model (P < 0.05).

Conclusions
A 2D CNN can achieve results superior to commercial atlas‐based software for OAR segmentation using non‐domain transfer learning, which has potential utility for quality assurance and expediting patient care.
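
The Methods describe a modified 2D U‐Net whose encoder is a VGG16 network pretrained on ImageNet. The abstract does not give the decoder layout, input size, loss function, or class count, so the sketch below is only an illustrative tf.keras construction of that kind of transfer‐learning architecture; the skip‐connection choices, 512 × 512 input, and six output classes (background plus five OARs) are assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' exact architecture): a 2D U-Net-style
# decoder built on a VGG16 encoder pretrained on ImageNet.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vgg16_unet(input_shape=(512, 512, 3), n_classes=6):
    encoder = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)

    # Skip connections taken from the end of each VGG16 convolutional block.
    skips = [encoder.get_layer(name).output for name in
             ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]
    x = encoder.get_layer("block5_conv3").output

    # Decoder: upsample, concatenate the matching skip feature map, convolve.
    for skip, filters in zip(reversed(skips), (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(encoder.input, outputs)

model = build_vgg16_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

A common follow‐on choice in this setting (not stated in the abstract) is to freeze the pretrained encoder for the first epochs and then fine‐tune the whole network end to end.

The Results compare per‐organ Dice coefficients and 95% Hausdorff distances between the CNN and the atlas‐based model using the Wilcoxon signed‐rank test. The snippet below is a generic NumPy/SciPy way to compute such metrics, shown only as an assumed illustration; the surface‐point representation and function names are not from the paper.

```python
# Assumed implementations of the reported evaluation metrics.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import wilcoxon

def dice(pred, truth):
    """Dice coefficient for two boolean masks of the same shape."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff95(pred_pts, truth_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g., contour/surface voxel coordinates in mm)."""
    d = cdist(pred_pts, truth_pts)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Paired per-patient scores for the two models (hypothetical arrays):
# cnn_dice, atlas_dice = ..., ...
# stat, p_value = wilcoxon(cnn_dice, atlas_dice)
```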

Highlights

  • Accurate delineation of tumor volumes and organs‐at‐risk (OARs) is an essential component of the radiation oncology workflow

  • Patients were divided into training, validation, and test datasets according to a 66%/17%/17% split (n = 112/28/28); a minimal split sketch follows this list

  • This paper presents a deep convolutional neural network (CNN) trained on a large dataset of routinely contoured patients
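
The 66%/17%/17% split noted above is at the patient level (n = 112/28/28 of 168 patients), so that all CT slices from a given patient fall into a single subset. A minimal sketch of such a split, with hypothetical patient IDs and an assumed random seed:

```python
# Assumed patient-level split (not the authors' code): shuffle patient IDs
# and partition them so no patient's slices appear in more than one subset.
import random

patient_ids = [f"patient_{i:03d}" for i in range(168)]  # hypothetical IDs

random.seed(42)                      # assumed seed for reproducibility
random.shuffle(patient_ids)

train_ids = patient_ids[:112]        # 66% of 168 patients
val_ids   = patient_ids[112:140]     # 17% (28 patients)
test_ids  = patient_ids[140:]        # 17% (28 patients)
```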


Summary

INTRODUCTION

Accurate delineation of tumor volumes and organs‐at‐risk (OARs) is an essential component of the radiation oncology workflow. Numerous methods have been attempted for automatic segmentation of thoracic organs, including atlas‐based methods, level‐set methods, and morphological methods. While CNNs initially achieved state‐of‐the‐art results in image classification tasks, their use in semantic segmentation was first proposed by Long et al.[4] Deep CNNs have since achieved state‐of‐the‐art results in medical image segmentation problems, such as magnetic resonance imaging‐based segmentation of the brain and prostate.[5,6] We hypothesized that our methodology would allow for convergence of a 2D neural network model with acceptable accuracy, low GPU overhead, and fast inference times.
