Abstract

Patients with interstitial lung disease (ILD) treated with thoracic radiotherapy (RT) are at greater risk of pulmonary toxicity. Automatic universal screening for ILD would allow radiation oncologists (ROs) to risk-stratify patients and modify their respiratory monitoring or treatment as needed. Automatic screening, however, may affect RO workload, so it is imperative to assess the clinical acceptability of such a tool. We have developed a machine learning algorithm that identifies patients at high risk of having ILD based on RT planning computed tomography (CT) images. A quality improvement (QI) project was initiated to test the feasibility and acceptability of this algorithm. When the screening result was positive, it was made available to the responsible RO via structured electronic reporting, and the RO was prompted to review the patient and to consider consultation with an expert radiologist if thought appropriate. All electronic surveys and qualitative comments were summarized to describe clinical acceptability. An expert radiologist established the gold-standard ILD status of all patients in the study. A formal review of RO feedback was collected for all screen-positive, true-positive cases. Two hundred forty cases were screened, of which 45 were flagged as AI-ILD positive and the responsible RO notified. All 45 screen-positive cases continued on to RT except for 3 patients with tumor progression. Of these 45 cases, 24 surveys were completed; in 21 there was no prior suspicion of ILD. There were 7 true positives, of which 1 had a survey response. Based on the survey responses, 88% of cases underwent review by the responsible RO, and in 16 cases the automatic notification prompted consultation with an expert radiologist. Expert review was performed between 10 minutes and 53 hours after the email prompt to the radiologist, with a median response time of 1.5 hours. Of the 7 screen-positive, true-positive cases, only 2 were not previously known to the responsible RO: one was a mild case of ILD, and the other had previously received thoracic RT at this institution without ILD being identified; in both cases the ROs were grateful that the diagnosis was identified before treatment. RO confidence in the machine learning prediction was moderate because of the high proportion of false positives. Based on the available survey results, more than 75% of the screen-positive cases were reviewed by the responsible RO, and two-thirds of these involved expert radiology input. RO feedback was generally positive, and the tool was rated as a net benefit despite the high rate of false positives and the need for clarification.
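For readers who want to trace the proportions quoted above, the minimal sketch below recomputes them from the counts reported in this abstract. The variable names are ours and the calculation is purely illustrative; it is not part of the study's analysis.

```python
# Back-of-envelope check of the proportions reported in the abstract.
# The counts are taken directly from the text; the derived percentages
# are our own arithmetic, not independently reported figures.
screened = 240             # planning CTs screened by the algorithm
flagged = 45               # AI-ILD positive cases (responsible RO notified)
true_positives = 7         # flagged cases with radiologist-confirmed ILD
surveys_completed = 24     # RO surveys returned for flagged cases
radiologist_consults = 16  # surveyed cases escalated to an expert radiologist

print(f"flag rate:                 {flagged / screened:.1%}")                        # ~18.8%
print(f"positive predictive value: {true_positives / flagged:.1%}")                  # ~15.6%
print(f"false-positive fraction:   {(flagged - true_positives) / flagged:.1%}")      # ~84.4%
print(f"consults among surveyed:   {radiologist_consults / surveys_completed:.1%}")  # ~66.7% ("two-thirds")
```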
