Abstract

Purpose/Objective(s)

Peer review encompasses the entire patient treatment process, including clinical information, therapeutic parameters, and potential treatment variations. Our hypothesis is that artificial intelligence (AI) can enhance the efficacy of the peer review process by screening cases with a potential for treatment interruption.

Materials/Methods

From 3,899 radiotherapy patients (7,168 plans) treated from 2014 to 2021 in our department, 36 clinical and therapeutic features were used as input for two AI models: a multivariable least absolute shrinkage and selection operator (LASSO) logistic regression model and a pattern recognition feed-forward neural network (NN). LASSO is a shrinkage regularization method that retains coefficients only for the most useful features. The NN passes the input features through multiple hidden nodes, assigning weights and biases that perform feature selection. Each method outputs a continuous probability of treatment interruption. Accuracy, sensitivity, and specificity were calculated as performance metrics.

Results

Overall, 8.1% of all cases had treatment interruptions, most commonly in head and neck (18.8%, vs. 7-9% for other sites; p<0.01). For the LASSO model, testing-set sensitivity, specificity, and accuracy ranged from 65-89%, 31-53%, and 35-54%, respectively, with sensitivity consistently higher than specificity, indicating the model was better at identifying true positives (actual interruptions). Spine/extremity and brain sites had the highest accuracy (54%). For the NN model, testing-set sensitivity, specificity, and accuracy ranged from 23-62%, 66-90%, and 64-86%, respectively, with specificity higher than sensitivity. The brain site had the highest accuracy (86%). The accuracy of the NN exceeded that of LASSO across all treatment sites (68% vs. 50%), particularly for brain (86% vs. 54%) and pelvis/prostate (77% vs. 35%), indicating that the linear hyperplane of LASSO was insufficient to accurately classify the underlying complex clinical dataset. Including more input features is expected to improve accuracy for both models.

Conclusion

Our results provide proof of concept that AI can serve as a screening tool to aid the peer review process by predicting treatment interruptions, including replanning and treatment cessation. Early identification of patients at risk of radiotherapy interruption using this method may translate into higher treatment completion rates. The study is ongoing at our institution to further explore the capabilities of AI in the peer review process.
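
To make the Materials/Methods concrete, below is a minimal, hypothetical sketch of an L1-regularized (LASSO) logistic regression screen in Python with scikit-learn. The synthetic data, regularization strength, decision threshold, and train/test split are illustrative assumptions, not the study's actual features or pipeline; only the overall pattern (fit a shrinkage model on 36 features, output a continuous interruption probability, and report accuracy, sensitivity, and specificity) follows the abstract.

```python
# Hypothetical sketch: L1-regularized (LASSO) logistic regression for screening
# treatment-interruption risk. Data, hyperparameters, and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(7168, 36))             # stand-in for 36 clinical/therapeutic features per plan
y = (rng.random(7168) < 0.081).astype(int)  # ~8.1% interruption rate, as in the abstract

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)

# The L1 penalty shrinks uninformative coefficients to zero (built-in feature selection).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                           class_weight="balanced")
lasso.fit(scaler.transform(X_train), y_train)

# Continuous probability of interruption, thresholded for screening.
p_interrupt = lasso.predict_proba(scaler.transform(X_test))[:, 1]
y_pred = (p_interrupt >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy_score(y_test, y_pred):.2f} "
      f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```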
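
A corresponding sketch for the feed-forward neural network follows, again in scikit-learn rather than whatever toolkit the authors used. The hidden-layer sizes, activation, and synthetic data are assumptions; only the workflow (train a small NN on the same inputs, output a continuous probability of interruption, threshold it, and compute the same metrics) mirrors the abstract.

```python
# Hypothetical sketch: a small feed-forward NN for the same screening task.
# Architecture and training settings are assumptions; the abstract only states
# "pattern recognition feed-forward neural network".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(7168, 36))             # stand-in for the 36 input features
y = (rng.random(7168) < 0.081).astype(int)  # ~8.1% interruption rate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

# Scale inputs, then pass them through two hidden layers of weighted nodes.
nn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                  max_iter=500, random_state=1),
)
nn.fit(X_train, y_train)

p_nn = nn.predict_proba(X_test)[:, 1]   # continuous probability of interruption
y_nn = (p_nn >= 0.5).astype(int)

print(f"accuracy={accuracy_score(y_test, y_nn):.2f} "
      f"sensitivity={recall_score(y_test, y_nn):.2f} "
      f"specificity={recall_score(y_test, y_nn, pos_label=0):.2f}")
```

Unlike LASSO, whose decision boundary is a single linear hyperplane in the feature space, the hidden layers let the NN learn nonlinear boundaries, which is consistent with the abstract's finding that the NN classified the complex clinical dataset more accurately.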
