Abstract
Ultrasound (US) imaging has been widely used in image-guided prostate brachytherapy. Current prostate brachytherapy uses transrectal US (TRUS) images for implant guidance, where contours of the prostate and organs-at-risk are necessary for treatment planning and dose evaluation. This work aims to develop a deep learning-based method for pelvic multi-organ TRUS segmentation to improve TRUS-guided prostate brachytherapy.

We developed an anchor-free mask convolutional neural network (CNN) that consists of three subnetworks: a backbone, a fully convolutional one-stage object detector (FCOS) head, and a mask head. The backbone extracts multi-level and multi-scale features from the US image. The FCOS head uses these features to detect the volume of interest (VOI) of each organ. In contrast to the previous mask regional CNN (Mask R-CNN) method, which performs detection and segmentation within several pre-defined candidate VOIs (called anchors), the FCOS head is anchor-free and can capture the spatial correlation among organs because it takes the whole image as input. The mask head performs segmentation within each detected VOI; a spatial attention strategy integrated into the mask head helps it focus on informative features that represent the organ boundary well, and suppresses noise introduced both by the preceding subnetworks and by the TRUS image itself. For evaluation, we retrospectively investigated 80 prostate cancer patients using five-fold cross-validation. The prostate, bladder, rectum, and urethra were segmented and compared with manual contours using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD).

For all patients, the DSC, HD95, and MSD were 0.93 ± 0.03, 2.28 ± 0.64 mm, and 0.57 ± 0.20 mm for the prostate; 0.75 ± 0.12, 2.58 ± 0.70 mm, and 1.26 ± 0.23 mm for the bladder; 0.90 ± 0.07, 1.65 ± 0.52 mm, and 0.34 ± 0.16 mm for the rectum; and 0.86 ± 0.07, 1.85 ± 1.71 mm, and 0.44 ± 0.32 mm for the urethra. The proposed method outperformed two state-of-the-art methods (U-Net and Mask R-CNN), showing better agreement with physicians' manual contours and fewer mis-identified speckles.

We proposed an automatic deep learning-based pelvic multi-organ segmentation method for 3D TRUS images in TRUS-guided prostate brachytherapy. The proposed method provides fast and accurate segmentations of the prostate, bladder, rectum, and urethra, and outperformed two competing methods. This deep learning-based multi-organ auto-segmentation may play an important role in future clinical practice by improving auto-planning and auto-evaluation during TRUS-guided prostate brachytherapy.
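To make the three-subnetwork layout concrete, the following is a minimal, hypothetical PyTorch sketch of an anchor-free detector with an attention-gated mask head. It is not the authors' implementation: all module names are illustrative, the backbone is a stand-in for the multi-level, multi-scale feature extractor, and the sketch is 2D for brevity even though the paper operates on 3D TRUS volumes.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention gate: reweights feature maps so that boundary-relevant
    locations are emphasized and speckle-like noise is suppressed."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))  # per-location weights in (0, 1)
        return x * attn

class AnchorFreeMaskNet(nn.Module):
    """Schematic three-subnetwork layout: backbone -> FCOS-style head -> mask head."""
    def __init__(self, num_classes=4):  # prostate, bladder, rectum, urethra
        super().__init__()
        # Stand-in backbone; the paper uses a multi-level, multi-scale extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # FCOS-style head: per-location class scores and box regression,
        # with no pre-defined anchors.
        self.cls_head = nn.Conv2d(64, num_classes, 3, padding=1)
        self.box_head = nn.Conv2d(64, 4, 3, padding=1)  # distances to VOI faces
        # Mask head with spatial attention; in the full method this would be
        # applied to features cropped from each detected VOI.
        self.mask_head = nn.Sequential(
            SpatialAttention(64),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.box_head(feats), self.mask_head(feats)

# Example: cls, box, mask = AnchorFreeMaskNet()(torch.randn(1, 1, 256, 256))
```

Because detection runs densely over the whole feature map rather than over anchor boxes, every organ's location is predicted jointly from the same shared features, which is what lets the detector exploit spatial correlations among the pelvic organs.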
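The evaluation metrics reported above are standard. Below is a short NumPy/SciPy sketch of how DSC, HD95, and MSD can be computed from binary masks; the function names and the symmetric surface-distance convention are assumptions for illustration, not the paper's code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def surface_distances(pred, gt, spacing):
    """Symmetric surface-to-surface distances in physical units (mm)."""
    pred_surf = pred ^ binary_erosion(pred)  # surface voxels of prediction
    gt_surf = gt ^ binary_erosion(gt)        # surface voxels of ground truth
    # Distance maps to each surface (distance_transform_edt measures the
    # distance to the nearest zero voxel, so invert the surface masks).
    dt_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    return np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])

def hd95_and_msd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and mean surface distance."""
    d = surface_distances(pred.astype(bool), gt.astype(bool), spacing)
    return np.percentile(d, 95), d.mean()

# Example with an assumed voxel spacing in mm:
# hd95, msd = hd95_and_msd(pred_mask, gt_mask, spacing=(0.5, 0.5, 1.0))
```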