Abstract

Accurate delineation of the clinical target volume (CTV) and organs at risk (OARs) in the pelvis plays a crucial role in follow-up radiotherapy for patients after hysterectomy. However, manual delineation is a time-consuming task that is susceptible to inter-observer variation. Automatic organ delineation relies on the global context contained in the extracted feature maps to segment the original CT image. Currently, numerous deep learning networks, such as U-Net++, are widely used for this task. In this paper, we use a standard U-Net++ as the fundamental structure and replace its standard convolutions with dilated convolutions. We observed that dilated convolutions enlarge the receptive field significantly, fusing more global contextual information and increasing the accuracy of organ segmentation. We evaluate the performance of the standard U-Net++ and the improved U-Net++ (with double-deck dilated convolutions) on multi-organ segmentation in the pelvis. Our data set consists of CT images of 70 patients after hysterectomy. We use the Dice similarity coefficient to quantify segmentation accuracy. The results of our experiment demonstrate that the improved U-Net++ outperforms the standard U-Net++ on the segmentation of the bladder, CTV, and rectum, with Dice scores of 93.2% ± 4.2% vs. 91.2% ± 3.7% for the bladder, 89% ± 3.6% vs. 85% ± 2.6% for the CTV, and 87.6% ± 3.6% vs. 84.7% ± 2.1% for the rectum. This shows that the segmentation results of the improved U-Net++ network are closer to those manually drawn by physicians than those of the standard U-Net++.
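The two quantities the abstract relies on can both be sketched in a few lines: the effective receptive field of stacked convolutions (each layer with kernel size k and dilation rate d adds (k − 1) · d, so dilation widens the field without extra parameters), and the Dice similarity coefficient, 2·|A ∩ B| / (|A| + |B|), used here as the segmentation metric. The function names, layer configuration, and toy masks below are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient, 2*|A∩B| / (|A| + |B|), for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def receptive_field(layers):
    """Effective 1-D receptive field of stacked stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) pairs; each layer
    adds (kernel_size - 1) * dilation to the receptive field.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Two stacked 3x3 standard convolutions vs. a pair with dilation rates 1 and 2
# (a hypothetical "double-deck" configuration, for illustration only):
rf_standard = receptive_field([(3, 1), (3, 1)])  # 1 + 2 + 2 = 5
rf_dilated = receptive_field([(3, 1), (3, 2)])   # 1 + 2 + 4 = 7

# Toy Dice example on 4x4 binary masks:
pred = np.array([[0, 1, 1, 0]] * 4)   # 8 foreground pixels
truth = np.array([[0, 1, 0, 0]] * 4)  # 4 foreground pixels, 4 overlapping
score = dice_coefficient(pred, truth)  # 2*4 / (8 + 4) = 0.666...
```

With the same number of 3 × 3 kernels, the dilated pair sees a wider context (7 vs. 5 pixels per axis), which is the mechanism the abstract credits for the improved Dice scores.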

