Abstract

Adjuvant and salvage radiotherapy after radical prostatectomy requires precise delineation of the prostate bed (PB), i.e., the clinical target volume, and the surrounding organs at risk (OARs) to optimize radiotherapy planning. Segmenting the PB on planning computed tomography (CT) images is particularly challenging, even for clinicians, because it is an invisible/virtual target after operative removal of the cancerous prostate gland. Very recently, a few deep learning-based methods have been proposed to automatically contour the PB on non-contrast CT by leveraging its spatial reliance on adjacent OARs (i.e., the bladder and rectum), which have much clearer boundaries, thus mimicking the clinical workflow of experienced clinicians. Although these methods achieve state-of-the-art results from both the clinical and technical perspectives, they improperly ignore the gap between the hierarchical feature representations needed for segmenting these fundamentally different clinical targets (i.e., PB and OARs), which in turn limits their delineation accuracy. This paper proposes an asymmetric multi-task network integrating dynamic cross-task representation adaptation (DyAdapt) for accurate and efficient one-pass co-segmentation of the PB and OARs from CT images. Within a learning-to-learn framework, the DyAdapt modules adaptively transfer hierarchical feature representations from the source task of OAR segmentation to the target, and more challenging, task of PB segmentation, conditioned on dynamic inter-task associations learned from the learning states of the feed-forward path. On a real-patient dataset, our method achieved state-of-the-art results for PB and OAR co-segmentation. Code is available at https://github.com/ladderlab-xjtu/DyAdapt.
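To make the cross-task adaptation idea more concrete, the sketch below shows a generic module that rescales and injects OAR-branch features into the PB branch, conditioned jointly on both branches' current feature maps. This is a minimal illustration under our own assumptions, not the authors' released DyAdapt implementation (which is available at the repository above); the class name, layer choices, and the 2D setting are hypothetical simplifications.

```python
# Illustrative sketch only: a generic cross-task feature-adaptation block,
# NOT the authors' released DyAdapt code (see their GitHub repository).
import torch
import torch.nn as nn


class CrossTaskAdapt(nn.Module):
    """Adapt source-task (OAR) features to the target task (PB),
    conditioned on the current features of both branches (hypothetical design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict per-channel scale/shift from the concatenated branch features,
        # serving as a stand-in for a dynamic inter-task association.
        self.condition = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_oar: torch.Tensor, feat_pb: torch.Tensor) -> torch.Tensor:
        # Scale and shift are inferred on the fly from both branches.
        gamma, beta = self.condition(
            torch.cat([feat_oar, feat_pb], dim=1)
        ).chunk(2, dim=1)
        adapted = feat_oar * torch.sigmoid(gamma) + beta
        # Inject the adapted OAR representation into the PB branch.
        return self.fuse(torch.cat([adapted, feat_pb], dim=1))


if __name__ == "__main__":
    block = CrossTaskAdapt(channels=64)
    oar = torch.randn(1, 64, 32, 32)
    pb = torch.randn(1, 64, 32, 32)
    print(block(oar, pb).shape)  # torch.Size([1, 64, 32, 32])
```

In practice such a block would be inserted at several decoder levels so that each scale of the PB branch receives adapted OAR features; for volumetric CT one would likely use 3D convolutions instead of the 2D layers shown here.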
