Abstract

Tumor radiotherapy target delineations are affected by the personal experience and contouring styles of individual radiation physicians, even though all delineations are based on the same guidelines or consensus. This variability complicates auto-delineation with artificial intelligence (AI). We therefore aimed to build a deep learning model with a style adaptation algorithm for automatic clinical target volume (CTV) segmentation in rectal cancer, so that a baseline model can adapt to the contouring styles of different radiation physicians through deep active learning and an attention mechanism.

Planning computed tomography (CT) data sets from 192 rectal cancer patients treated by one radiation physician at our institution were selected retrospectively. Of these, 172 cases were used for training and 20 for testing the baseline auto-delineation model, which was based on the CMU-Net approach. Multi-U-Net (CMU-Net), a novel segmentation approach, first recognizes the upper and lower bounds of the CTV and then segments it accurately. We then selected another 173 patients treated by two other radiation physicians at our institution (99 and 84 patients, respectively) to develop style adaptation models from the baseline auto-delineation model using deep active learning and an attention mechanism. The same data (training sets of 79 and 64 cases; testing sets of 20 cases each) were also used to build two models (models 1 and 2) for comparison with the style adaptation results. To explore how many patients were needed for the style adaptation models to achieve the same accuracy as models 1 and 2, we gradually increased the number of cases used for style adaptation, adding ten patients to the training each time. Finally, the Dice similarity coefficient (DSC) between model outputs and manual contours was assessed and compared on the testing sets.

Seven style adaptation models (1a-1g) and six (2a-2f) were built for comparison with models 1 and 2, respectively. The mean DSC of the baseline model, model 1, and model 2 was 94.28, 92.43, and 91.57, respectively. The mean DSC of style adaptation models 1a-1g was 79.12, 85.29, 90.57, 92.12, 93.01, 93.33, and 94.56; that of style adaptation models 2a-2f was 87.54, 91.27, 92.39, 92.65, 93.12, and 93.94, respectively.

It is possible to build a baseline model according to one physician's style and then progressively train a style adaptation model for another physician's style using deep active learning and an attention mechanism. By increasing the sample size by 20-30 cases, the DSC of the style adaptation model reaches more than 90.
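The reported DSC values compare binary CTV masks from the models against manual contours. The abstract does not give the implementation, so the sketch below is only a minimal illustration of how such a comparison could be computed with NumPy; the array names and the 0-100 scaling (matching the reported values) are assumptions.

import numpy as np

def dice_similarity_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks, scaled to 0-100.

    pred_mask / gt_mask: boolean or 0/1 arrays of identical shape
    (e.g. a 3D CTV volume on the planning CT grid). Names are illustrative.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 100.0
    return 100.0 * 2.0 * intersection / denom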
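The incremental experiment, in which ten of the target physician's cases are added per round before re-testing, could be organized roughly as follows. This is a schematic sketch only: the function and parameter names are hypothetical placeholders, not the authors' code, and the active-learning case selection and attention mechanism are abstracted behind the fine-tuning callable.

from typing import Callable, List, Sequence, Tuple

ROUND_SIZE = 10  # patients added to the style adaptation training set per round

def run_style_adaptation(
    baseline_model,
    physician_cases: Sequence,
    test_cases: Sequence,
    fine_tune_fn: Callable,      # hypothetical: one active-learning fine-tuning step
    evaluate_dsc_fn: Callable,   # hypothetical: mean DSC of model vs. manual contours
) -> Tuple[object, List[Tuple[int, float]]]:
    """Progressively adapt a baseline CTV model toward one physician's style.

    Mirrors the procedure described above: ten of the target physician's cases
    are added per round, the model is fine-tuned, and the mean DSC on a fixed
    test set is recorded after each round.
    """
    adapted = baseline_model
    history: List[Tuple[int, float]] = []
    for start in range(0, len(physician_cases), ROUND_SIZE):
        batch = physician_cases[start:start + ROUND_SIZE]
        adapted = fine_tune_fn(adapted, batch)
        history.append((start + len(batch), evaluate_dsc_fn(adapted, test_cases)))
    return adapted, history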
