Abstract

<h3>Purpose/Objective(s)</h3> Clinical target volume (CTV) definition for post-operative prostate bed radiotherapy is subject to large intra- and inter-observer variation among radiation oncologists, despite several published group consensus guidelines. The aim of this study is to evaluate the utility of artificial intelligence (AI) transferred incremental learning for building institutional group-intelligence-based models for post-operative prostate bed CTV and organs-at-risk (OARs) delineation. <h3>Materials/Methods</h3> One hundred and ten prostate bed patients were retrospectively collected and randomly divided into training (n=60) and testing (n=50) groups. Each dataset included the physician-approved prostate bed CTV and five OARs (bladder, rectum, left/right femoral head, and penile bulb). The training datasets were used to build two customized AI models (OAR and CTV) from the existing vendor-provided models. Quantitative and qualitative evaluations were performed on the AI-generated contours for the 50 testing datasets in comparison with the corresponding manual ones. The qualitative evaluation was conducted among six expert radiation oncologists (ROs), who were asked to score each contour on a four-level scale (precise, acceptable, minor revision, and manual redraw) and to indicate their clinical preference between the two contour sets, presented blindly (AI vs. manual). Scoring subjectivity was studied by cross-comparison among the six ROs. Physician editing time was measured for all six ROs on five patients whose CTV was scored as <i>minor revision</i> by physician consensus. <h3>Results</h3> Physicians chose the AI-generated OARs and CTVs over the manual ones 34% and 32% of the time, respectively (ranges: 2%-54% and 12%-60%). The AI-generated and manual OARs were scored precise or acceptable in 90% and 98% of cases (median values), respectively. The AI-generated CTVs were scored equal to or better than the corresponding manual delineations 53% of the time (range: 22%-88% across the six experts).
The average physician editing time starting from the AI-generated CTV was 3m 10s ± 36s. The time required to train the custom models was 27h 20m ± 8m. <h3>Conclusion</h3> Blinded scoring benchmarked against the manual contours eliminates personal subjectivity. Contour comparison metrics and physician scoring demonstrated comparable results between AI-generated and manual OAR contours. Although almost half of the AI-generated CTVs required physician editing, the editing time was moderate. The custom AI models trained via transferred incremental learning can improve OAR and target contouring efficiency and consistency for prostate bed patients.
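The abstract does not specify which contour comparison metrics were used for the quantitative evaluation; a standard choice in auto-segmentation studies is the Dice similarity coefficient (DSC) between the AI-generated and manual contour masks. A minimal sketch of that metric (the function name and the toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap, 0.0 = no overlap.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: one 2-D slice of an "AI" and a "manual" contour mask
ai_mask = np.zeros((10, 10), dtype=bool)
manual_mask = np.zeros((10, 10), dtype=bool)
ai_mask[2:7, 2:7] = True      # 5x5 = 25 voxels
manual_mask[3:8, 3:8] = True  # 5x5 = 25 voxels, shifted by one
print(round(dice_coefficient(ai_mask, manual_mask), 3))  # 4x4 overlap → 2*16/50 = 0.64
```

In a full evaluation, such a metric would typically be computed per structure (CTV and each OAR) over all 50 testing cases, alongside surface-distance measures.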
