Abstract

Cone-beam computed tomography (CBCT) is increasingly used in radiotherapy for patient alignment and adaptive therapy, where organ segmentation and target delineation are often required. However, due to poor image quality, low soft-tissue contrast, and the difficulty of acquiring segmentation labels on CBCT images, developing effective segmentation methods for CBCT has been challenging. In this paper, we propose a deep model for segmenting organs in CBCT images without requiring labelled CBCT training images. By taking advantage of available segmented computed tomography (CT) images, our adversarial-learning domain adaptation method synthesizes CBCT images from CT images. The segmentation labels of the CT images can then be used to train a deep segmentation network for CBCT images, using both labelled CTs and unlabelled CBCTs. The adversarial-learning domain adaptation is integrated with the training of the CBCT segmentation network through the designed loss functions. The CBCT images synthesized by pixel-level domain adaptation capture the critical image features that help achieve accurate CBCT segmentation. Our experiments on bladder images from radiation oncology clinics show that our CBCT segmentation with adversarial-learning domain adaptation significantly improves segmentation accuracy compared with existing methods that do not perform domain adaptation from CT to CBCT.
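To make the described pipeline concrete, the sketch below illustrates one way such joint training could be wired up: a generator G synthesizes CBCT-like images from labelled CT, a discriminator D distinguishes real (unlabelled) CBCT from synthesized CBCT, and a segmentation network S is trained on the synthesized images using the CT labels. This is a minimal illustration under assumed choices (placeholder networks, loss weights, and variable names such as `lambda_adv`), not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): adversarial pixel-level
# domain adaptation from CT to CBCT jointly trained with a CBCT segmenter.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

# Placeholder networks; a real system would use deeper architectures (e.g., U-Net style).
G = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # CT -> synthetic CBCT
D = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))  # real vs. synthetic logits
S = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 2, 3, padding=1))  # 2-class segmentation (organ / background)

adv_loss = nn.BCEWithLogitsLoss()
seg_loss = nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(S.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_adv = 0.1  # assumed weighting between adversarial and segmentation terms

def train_step(ct, ct_label, cbct):
    """ct, cbct: (N,1,H,W) images; ct_label: (N,H,W) integer masks on the CT images."""
    # Discriminator: real CBCT should score 1, synthesized CBCT should score 0.
    fake_cbct = G(ct).detach()
    d_real, d_fake = D(cbct), D(fake_cbct)
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator + segmenter: fool D while segmenting the synthesized CBCT with CT labels.
    fake_cbct = G(ct)
    d_fake = D(fake_cbct)
    loss_g = lambda_adv * adv_loss(d_fake, torch.ones_like(d_fake)) + \
             seg_loss(S(fake_cbct), ct_label)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this sketch the unlabelled CBCT images enter only through the discriminator, which is what pushes the synthesized images toward the CBCT appearance; the CT labels supervise the segmenter on those synthesized images, so no CBCT labels are needed.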

