Abstract

The aim of this work was to develop a generative adversarial network (GAN)-based deep learning approach to estimate the multileaf collimator (MLC) aperture and corresponding monitor units (MUs) from a given three-dimensional (3D) dose distribution. We developed a treatment plan verification framework by implementing a modified pix2pix network that learns a mapping between CT images, segment-level 3D dose, and MU/MLC shape maps. The proposed adversarial network, which integrates a residual block into the pix2pix framework, jointly trains a U-Net-like architecture as the generator and a convolutional PatchGAN classifier as the discriminator. A total of 199 patients with nasopharyngeal, lung, and rectal cancers, treated with intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT), were used to train the network. The resliced volumetric dose of each segment and the CT image datasets were labeled with the corresponding MU/MLC maps. An additional 47 patients were used to test the prediction accuracy of the proposed deep learning model. The Dice similarity coefficient (DSC) was calculated to evaluate the similarity between the MLC aperture shapes obtained from the treatment planning system (TPS) and those predicted by the deep learning model. The average and standard deviation of the bias between the TPS-generated MUs and the predicted MUs were calculated to evaluate MU prediction accuracy, and the differences between TPS and predicted MLC leaf positions were also compared. Model performance was first evaluated on five simple fields, four square and one irregular, for which the DSCs were 0.99, 0.99, 0.99, 0.98, and 0.96, respectively, and the predicted MUs agreed with the original plans to within 5%. For the 47 testing patients, the average and standard deviation of the DSC were 0.94 ± 0.043, the average deviation of the predicted MUs from the planned MUs, normalized to each beam or arc, was within 2%, and the average deviation of the predicted MLC leaf positions was around one pixel. In summary, a novel pix2pix-based deep learning network was developed to predict MUs/MLC shapes for treatment plan verification. This is the first attempt to use a deep learning method to predict fundamental machine delivery parameters, and it may provide the radiation oncology community with a useful tool for improving the efficiency and accuracy of patient-specific QA and the plan second-check process.
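
The evaluation described above rests on two quantities: the Dice similarity coefficient between TPS and predicted MLC aperture shapes, and the MU deviation normalized to each beam or arc. The sketch below is a minimal NumPy illustration of how such metrics could be computed; it is not the authors' code, and the function names, toy aperture masks, and per-segment normalization choice are assumptions made for illustration only.

```python
import numpy as np

def dice_similarity(aperture_tps, aperture_pred):
    """Dice similarity coefficient between two binary MLC aperture masks.

    Both inputs are 2D 0/1 arrays of the same shape, where 1 marks pixels
    inside the open aperture.
    """
    aperture_tps = aperture_tps.astype(bool)
    aperture_pred = aperture_pred.astype(bool)
    intersection = np.logical_and(aperture_tps, aperture_pred).sum()
    total = aperture_tps.sum() + aperture_pred.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def mu_deviation(mu_tps, mu_pred):
    """Per-segment MU deviation normalized to the total MUs of the beam/arc.

    `mu_tps` and `mu_pred` are 1D arrays of segment (control point) MUs from
    the TPS and the network prediction, respectively. Returns the mean and
    standard deviation of the normalized deviation (assumed normalization).
    """
    mu_tps = np.asarray(mu_tps, dtype=float)
    mu_pred = np.asarray(mu_pred, dtype=float)
    deviation = (mu_pred - mu_tps) / mu_tps.sum()
    return deviation.mean(), deviation.std()

# Toy example (values are illustrative, not data from the paper):
tps_aperture = np.zeros((64, 64), dtype=int)
tps_aperture[16:48, 16:48] = 1          # square aperture from the TPS
pred_aperture = np.zeros((64, 64), dtype=int)
pred_aperture[17:48, 16:47] = 1         # predicted aperture, off by ~1 pixel
print(f"DSC: {dice_similarity(tps_aperture, pred_aperture):.3f}")

mean_dev, std_dev = mu_deviation([20, 35, 45], [20.4, 34.5, 45.6])
print(f"MU deviation: {mean_dev:.3%} ± {std_dev:.3%}")
```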
