Abstract

<h3>Purpose/Objective(s)</h3>
Treating lung tumors with stereotactic body radiotherapy (SBRT) unavoidably damages the normal lung tissue surrounding the target tumor. As a result, physicians must closely monitor follow-up computed tomography (CT) scans for radiographic signs of radiation-induced lung injury (RILI). Differentiating among the types of RILI, tumor recurrence, and tumor scar tissue can be difficult for clinicians to do visually. Recent studies have shown the effectiveness of modern convolutional neural networks (CNNs) for image analysis with artificial intelligence (AI) methods. This work explored whether such AI methods, applied to post-SBRT surveillance CT images pooled from three separate institutions, can help clinicians recognize RILI.

<h3>Materials/Methods</h3>
This work used a database of post-SBRT follow-up CT scans that radiation oncologists had previously evaluated for RILI. Because of the relative paucity of training data, the proposed approach uses self-supervised learning (SSL), a powerful AI paradigm, to initialize a CNN on a widely available diagnostic CT database of lung cancer. Using the derived radiomic features and network weights as part of a 3D ResNet-10, a fully connected classifier layer was then trained to discriminate between RILI and normal lung tissue on 222 labeled post-SBRT CT scans of 64 subjects from three independent institutions. On each of these scans, RILI had been independently evaluated by a radiation oncologist.

<h3>Results</h3>
The best-performing CNN achieved an area under the ROC curve (ROC-AUC) of 0.894 on the test set, where 0.5 corresponds to random guessing and 1.0 to perfect discrimination of RILI. The score was obtained as the average ROC-AUC across folds under 5-fold cross-validation on the dataset.

<h3>Conclusion</h3>
This work suggests that 3D CT imagery contains valuable radiomic features that allow post-SBRT RILI to be identified, and that these features can be learned automatically with modern artificial intelligence techniques. Furthermore, a deep neural network trained for RILI classification achieves notably high classification performance, which may prove useful in the clinical setting for automatically flagging scans with a high probability of RILI for closer analysis by clinical experts.
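The transfer-learning step described in Materials/Methods (an SSL-initialized 3D ResNet-10 backbone with a newly trained fully connected classifier head) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the backbone below is a small stand-in for a real 3D ResNet-10, and the weight file name, layer sizes, and optimizer settings are illustrative assumptions.

```python
# Hedged sketch: freeze an SSL-pretrained 3D feature extractor and train only a
# fully connected head to discriminate RILI from normal lung tissue.
import torch
import torch.nn as nn

class Backbone3D(nn.Module):
    """Placeholder for an SSL-pretrained 3D ResNet-10 feature extractor."""
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, feature_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )

    def forward(self, x):                       # x: (B, 1, D, H, W) CT volume
        return self.features(x).flatten(1)      # -> (B, feature_dim) radiomic features

backbone = Backbone3D()
# backbone.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical SSL weights

for p in backbone.parameters():                 # keep pretrained features fixed
    p.requires_grad = False

head = nn.Linear(512, 1)                        # fully connected RILI classifier layer
model = nn.Sequential(backbone, head)

criterion = nn.BCEWithLogitsLoss()              # binary target: RILI (1) vs. normal (0)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

def train_step(volumes: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of labeled post-SBRT CT volumes."""
    optimizer.zero_grad()
    loss = criterion(model(volumes).squeeze(1), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```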
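The reported metric in Results (ROC-AUC averaged over 5-fold cross-validation) could be computed along the lines of the sketch below. The function `fit_and_predict` and the arrays `X` and `y` are assumed placeholders, not the authors' pipeline.

```python
# Hedged sketch: average ROC-AUC over the held-out split of each of 5 folds.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(X: np.ndarray, y: np.ndarray, fit_and_predict) -> float:
    """Average ROC-AUC over 5 stratified folds.

    `fit_and_predict(train_idx, test_idx)` is assumed to train the network on
    the training scans and return predicted RILI probabilities for the test scans.
    """
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    fold_aucs = []
    for train_idx, test_idx in skf.split(X, y):
        probs = fit_and_predict(train_idx, test_idx)
        fold_aucs.append(roc_auc_score(y[test_idx], probs))
    return float(np.mean(fold_aucs))   # e.g. the 0.894 reported in the abstract
```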
