Abstract

Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the need for recontouring and the associated time burden hinder a real-time or online ART workflow. In response to this challenge, auto-segmentation approaches based on deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, chiefly because of the difficulty of curating a dataset of sufficient size and quality to yield a generalizable trained model. To address this challenge, we previously developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly that the personalized dataset was insufficient to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation. The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. The general model is then tuned with a dataset consisting of two components: (a) a subset of the patient data (m < n) selected using the similarity metrics mean squared error (MSE), peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the universal quality image index (UQI); and (b) CT images and clinical contours adjusted using deformation vectors generated between the reference patient and the patients selected in (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95) computed for 18 structures in 20 test patients. Implementing the PHL-IDOL framework improved segmentation performance for each patient. The average Dice score increased from 0.81 ± 0.05 with the general model, 0.83 with the continual model, and 0.83 with the conventional IDOL model to 0.87 with the PHL-IDOL model. Similarly, the HD95 decreased from 3.06 with the general model to 2.84 with the continual model, 2.79 with the conventional IDOL model, and 2.36 with the PHL-IDOL model. All standard deviations were reduced by nearly half between the general model and the PHL-IDOL model. Applied to the auto-segmentation task, the PHL-IDOL framework achieves improved performance compared with the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.
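To make the patient-selection step (a) concrete, the following is a minimal sketch, not the authors' code: it ranks candidate patients' CT volumes by similarity to the reference patient using MSE, PSNR, SSIM, and a global UQI, then keeps the m most similar. The rank-averaging combination rule, the function names, and the assumption that all volumes are resampled to a common grid are illustrative choices, not details taken from the paper.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)


def global_uqi(x, y, eps=1e-12):
    """Global universal quality image index (Wang & Bovik, 2002)."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)


def select_similar_patients(reference_ct, cohort_cts, m=10):
    """Return indices of the m cohort CTs most similar to the reference CT.

    Assumes every CT volume has been resampled to the same grid as the
    reference. Similarity is a consensus of per-metric ranks over
    MSE, PSNR, SSIM, and UQI (an illustrative combination rule).
    """
    data_range = float(reference_ct.max() - reference_ct.min())
    mse, psnr, ssim, uqi = [], [], [], []
    for ct in cohort_cts:
        mse.append(mean_squared_error(reference_ct, ct))
        psnr.append(peak_signal_noise_ratio(reference_ct, ct,
                                            data_range=data_range))
        ssim.append(structural_similarity(reference_ct, ct,
                                          data_range=data_range))
        uqi.append(global_uqi(reference_ct, ct))

    def rank(values, higher_is_better):
        # Rank 0 = most similar according to this metric.
        order = np.argsort(values)
        if higher_is_better:
            order = order[::-1]
        ranks = np.empty(len(values))
        ranks[order] = np.arange(len(values))
        return ranks

    mean_rank = (rank(mse, False) + rank(psnr, True)
                 + rank(ssim, True) + rank(uqi, True)) / 4.0
    return np.argsort(mean_rank)[:m]
```

Likewise, the two reported evaluation metrics can be sketched as below for binary structure masks with a known voxel spacing; this is one common way to compute VDSC and HD95, not necessarily the exact implementation used in the study.

```python
import numpy as np
from scipy import ndimage


def volumetric_dice(pred, ref):
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    overlap = np.logical_and(pred, ref).sum()
    return 2.0 * overlap / (pred.sum() + ref.sum())


def hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (HD95), in spacing units."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Surface voxels: mask minus its one-voxel erosion.
    surf_p = pred ^ ndimage.binary_erosion(pred)
    surf_r = ref ^ ndimage.binary_erosion(ref)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_r = ndimage.distance_transform_edt(~surf_r, sampling=spacing)
    dist_to_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    distances = np.hstack([dist_to_r[surf_p], dist_to_p[surf_r]])
    return np.percentile(distances, 95)
```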
