Abstract

In lung cancer radiation therapy, clinicians must outline the gross tumor volume (GTV) precisely on the planning computed tomography (pCT) for accurate radiation dose delivery. However, due to the limited contrast between tumor and normal tissues in the lung parenchyma, accurate delineation of tumor boundaries is difficult, leading to large inter-observer variation. In this study, we develop an anatomy-guided lung GTV deep segmentation model using a training cohort of multi-center datasets. Quantitative segmentation performance is evaluated on an independent dataset, on which the inter-observer delineation variation is also assessed. We collected and curated four publicly available lung datasets with GTV annotations (Lung-PET-CT-Dx, LIDC-IDRI, NSCLC-Radiogenomics and RIDER-CT) for deep learning model development. After data curation, a total of 871 CT scans of patients diagnosed with T1-T4 NSCLC were available for training. The GTV annotations of the primary tumor were examined and edited by two experienced radiation oncologists following the RTOG 1106 protocol. The proposed anatomy-guided deep learning model consists of two deep networks. The first network takes the CT scan as input and segments four anatomic organs (airway, heart, pulmonary artery and pulmonary vein); the second network takes both the CT scan and the four pre-segmented organs as input and segments the lung GTV. With the anatomic priors from the four pre-segmented organs, the second network can more easily locate the GTV. We used nnUNet as the deep segmentation network. For evaluation, we used NSCLC-Radiomics as the testing dataset, which contains 20 CT scans, each annotated by 5 radiation oncologists. The auto-segmented GTVs were compared against each of the manual GTV references. Inter-observer variation was also assessed using the 5 manual GTV references.
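As an illustration of the anatomy-guided design, the sketch below shows one plausible way the stage-2 input could be assembled: the normalized CT volume is concatenated channel-wise with the four binary organ masks produced by the first network, so the GTV network sees 5 input channels instead of 1. The array shapes and variable names here are hypothetical, not taken from the authors' implementation.

```python
import numpy as np

# Hypothetical shapes: one normalized CT volume and four binary organ masks
# (airway, heart, pulmonary artery, pulmonary vein) produced by the stage-1 network.
ct = np.random.randn(96, 96, 96).astype(np.float32)
organ_masks = np.random.randint(0, 2, (4, 96, 96, 96)).astype(np.float32)

# Stage-2 input: concatenate the CT with the 4 pre-segmented organ channels,
# yielding a 5-channel volume for the GTV segmentation network.
stage2_input = np.concatenate([ct[None, ...], organ_masks], axis=0)
print(stage2_input.shape)  # (5, 96, 96, 96)
```

In practice, nnUNet handles multi-channel inputs natively, so the organ masks would simply be registered as additional input modalities in its dataset configuration.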
The proposed anatomy-guided lung GTV segmentation model achieved a mean Dice score of 82.4% and a 95% Hausdorff distance (HD95) of 6.9 mm when averaged across 20 patients and 5 GTV references (Table 1), outperforming the basic deep GTV segmentation model by reducing the HD95 error by 19.4%. The performance of the proposed model was also comparable to the inter-observer variation (Dice score: 82.4% vs. 81.9%; HD95: 6.9 mm vs. 6.4 mm), indicating that our model has reproducibility similar to that of human observers. We developed and tested an anatomy-guided deep learning model for segmenting the GTV in NSCLC patients. The model achieves high quantitative segmentation performance, comparable to human observer variation. It could be used in radiotherapy practice to improve GTV delineation consistency and reduce the workload of radiation oncologists.
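For clarity on the reported Dice metric, the following minimal sketch computes the Dice similarity coefficient between a predicted and a reference binary mask. The toy 2-D masks are illustrative only; the study's evaluation was performed on 3-D GTV volumes.

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two 4x4 squares overlapping in a 2x2 region (4 of 16+16 voxels).
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True
print(dice_score(a, b))  # 2*4 / (16+16) = 0.25
```

The HD95 metric is computed analogously from surface distances (e.g. the 95th percentile of symmetric boundary-to-boundary distances), for which libraries such as SciPy's distance transforms are typically used.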
