Abstract
Radiotherapy is one of the most common treatments for tumors. To accurately control the radiation dose distribution and reduce radiation damage to normal tissues and organs during radiotherapy, organs at risk (OARs) must be delineated precisely. However, manual delineation and traditional methods are labor-intensive and time-consuming, so fast and precise segmentation methods are urgently needed in radiotherapy. This paper proposes a fully automatic segmentation method based on the 3D U-Net for multiple organs in the head and neck. It introduces squeeze-and-attention blocks to gather multi-scale context information and a receptive field block to balance performance between large and small organs. Furthermore, the network is trained with marginal and exclusion loss functions in a partially supervised learning mode. We evaluated the model with the Dice similarity coefficient (DSC), the 95% Hausdorff distance (95HD), and inference time. Its average DSC is 0.829, which is 4.5%, 3.2%, and 2.4% higher than that of AnatomyNet, nnU-Net, and FocusNet, respectively, and its average 95HD is 2.19. Moreover, its inference time and parameter count are 63% and 60% lower than those of FocusNetv2. For the segmentation of OARs in the head and neck, our model is more accurate than AnatomyNet, faster than FocusNetv2, and strikes a better balance between segmentation accuracy and inference time, demonstrating that our method is well suited for clinical treatment.
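The two evaluation metrics named above, DSC and 95HD, have standard definitions for binary segmentation masks. Below is a minimal NumPy/SciPy sketch of how they are commonly computed; the function names and the surface-extraction helper are illustrative and are not taken from the paper's code. Distances are in voxel units unless the coordinates are first scaled by the scan's voxel spacing.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Coordinates of voxels on the boundary of a binary mask."""
    eroded = ndimage.binary_erosion(mask)
    return np.argwhere(mask & ~eroded).astype(float)

def hausdorff_95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between two masks."""
    p_surf = surface_voxels(pred.astype(bool))
    g_surf = surface_voxels(gt.astype(bool))
    if len(p_surf) == 0 or len(g_surf) == 0:
        return float("inf")  # one mask is empty; distance undefined
    dists = cdist(p_surf, g_surf)  # pairwise Euclidean distances
    d_pg = dists.min(axis=1)       # pred surface -> gt surface
    d_gp = dists.min(axis=0)       # gt surface -> pred surface
    return float(np.percentile(np.hstack([d_pg, d_gp]), 95))
```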
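The marginal loss mentioned in the abstract addresses partially supervised training, where each dataset annotates only a subset of the organs: the predicted probability mass of all unlabelled classes is merged into the background channel before the loss is computed. The following PyTorch sketch illustrates that idea with a cross-entropy term; tensor names are illustrative, and the paper may combine this with a Dice term and the exclusion loss (which additionally penalizes overlap between mutually exclusive organs).

```python
import torch
import torch.nn.functional as F

def marginal_cross_entropy(logits: torch.Tensor,
                           target: torch.Tensor,
                           labelled: list) -> torch.Tensor:
    """
    logits:   (N, C, D, H, W) raw network outputs over all C classes.
    target:   (N, D, H, W) integer labels; unlabelled organs appear as 0.
    labelled: sorted class indices annotated in this sample; must
              include 0 (background).
    """
    probs = F.softmax(logits, dim=1)
    unlabelled = [c for c in range(logits.shape[1]) if c not in labelled]
    # Merge unlabelled-class probability mass into the background channel.
    merged_bg = probs[:, [0] + unlabelled].sum(dim=1, keepdim=True)
    merged = torch.cat([merged_bg, probs[:, labelled[1:]]], dim=1)
    # Remap targets to the merged label space (background stays 0).
    new_target = torch.zeros_like(target)
    for i, c in enumerate(labelled[1:]):
        new_target[target == c] = i + 1
    return F.nll_loss(torch.log(merged.clamp_min(1e-8)), new_target)
```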