Abstract
Organs-at-risk (OARs) segmentation in computed tomography (CT) is a fundamental step in the radiotherapy workflow and has long been a time-consuming and labor-intensive task. Deep neural networks (DNNs) have gained significant popularity for OAR segmentation, achieving remarkable progress in clinical practice. Typically, OARs are distributed throughout different areas of the body and require CT scans of varying slice thicknesses for accurate diagnosis and segmentation in clinical practice. Most DNN-based segmentation methods focus on single-thickness CT scans, which limits their applicability to other thicknesses because they do not learn thickness-diverse features. While pre-training with the denoising diffusion probabilistic model (DDPM) offers an effective route to dense feature learning, current works remain limited in capturing feature diversity, as exemplified by scenarios such as multi-thickness CT. To address these challenges, this paper introduces a novel pre-training approach called DiffMT, which leverages the DDPM to extract valuable features from multi-thickness CT images. By transferring the pre-trained DDPM to the downstream segmentation task for fine-tuning, the model learns diverse multi-thickness CT features, enabling precise segmentation across varied thicknesses. We explore DiffMT's feature learning capacity through experiments involving pre-trained models of varying sizes and different denoising thicknesses. Subsequently, thorough experiments comparing DDPM-based segmentation with other state-of-the-art (SOTA) CT segmentation methods, along with assessments on diverse OARs and modalities, empirically demonstrate that the proposed DiffMT outperforms the control methods. The code is available at https://github.com/ychengrong/DiffMT.
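For readers unfamiliar with the two-stage pipeline the abstract describes, the sketch below illustrates it in PyTorch: standard DDPM noise-prediction pre-training on CT slices of mixed thicknesses, followed by transferring the denoiser's weights into a segmentation network for fine-tuning. This is a minimal illustration under our own assumptions, not the authors' implementation; the TinyUNet backbone, the hyperparameters, and the dummy data are hypothetical placeholders (the actual code is at the GitHub link above).

```python
# Minimal sketch (not the authors' code) of DDPM pre-training followed by
# segmentation fine-tuning. TinyUNet and all hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                    # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)   # cumulative product \bar{alpha}_t

class TinyUNet(nn.Module):
    """Stand-in denoising backbone; a full UNet would be used in practice."""
    def __init__(self, out_ch=1):
        super().__init__()
        self.enc = nn.Conv2d(1, 32, 3, padding=1)
        self.t_embed = nn.Embedding(T, 32)  # timestep conditioning
        self.dec = nn.Conv2d(32, out_ch, 3, padding=1)

    def forward(self, x, t):
        h = F.relu(self.enc(x) + self.t_embed(t)[:, :, None, None])
        return self.dec(h)

def ddpm_step(model, x0, opt):
    """One pre-training step: corrupt x0 to x_t, regress the added noise."""
    t = torch.randint(0, T, (x0.size(0),))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps   # q(x_t | x_0)
    loss = F.mse_loss(model(x_t, t), eps)        # epsilon-prediction objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Stage 1: pre-train on CT slices of mixed thicknesses (dummy batch here).
denoiser = TinyUNet(out_ch=1)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
ddpm_step(denoiser, torch.randn(4, 1, 64, 64), opt)

# Stage 2: transfer the pre-trained weights to a segmentation network whose
# head predicts one channel per OAR class; shape-mismatched layers (here,
# the head) are skipped and trained from scratch during fine-tuning.
num_classes = 5                                   # hypothetical OAR count
seg = TinyUNet(out_ch=num_classes)
tgt = seg.state_dict()
src = {k: v for k, v in denoiser.state_dict().items()
       if k in tgt and v.shape == tgt[k].shape}
seg.load_state_dict(src, strict=False)
logits = seg(torch.randn(2, 1, 64, 64), torch.zeros(2, dtype=torch.long))
```

Feeding a fixed timestep at segmentation time follows the common practice in diffusion-based segmentation of extracting features at a chosen noise level; whether DiffMT conditions the fine-tuned model this way is an assumption of this sketch.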