Segmentation of head and neck (H&N) organs at risk (OARs) is an intricate process. We propose the Context-extractor Attention U-Net (CAU-Net) to improve the segmentation accuracy of OAR contours, addressing the limited accuracy of 2D segmentation algorithms in current radiotherapy workflows. A total of 60 patients from the CSTRO2019 H&N dataset, containing 22 organs, were available to train and evaluate a prototype deep learning-based 2D auto-segmentation algorithm for normal tissue. CAU-Net builds on U-Net: an edge attention module enhances boundary representation, and dilated (atrous) convolution blocks in the context extraction module encode high-level semantic features, assigning receptive fields of different sizes to different targets. A Dice loss function combined with a contour loss function was used to train the models. The contour loss function was improved by weighting organs according to their frequency of occurrence and by assigning false-positive and false-negative regions, so that boundary structures are predicted accurately. The OARs were delineated by a single experienced physician. A subset of 10 cases was withheld from training and used for validation. On these cases, we compared three deep-learning networks trained on CSTRO against the gold-standard data: A) CAU-Net, B) nnU-Net, and C) UNet++. To test generalizability, we used another public H&N dataset, the Public Domain Database for Computational Anatomy (PDDCA), containing 8 organs from 47 patients, of which 10 cases were used for validation: D) CAU-Net trained on PDDCA and E) UNet2022 trained on PDDCA. The Dice similarity coefficient (DSC) was used to measure the overlap between the gold-standard data and the automated segmentations. The average DSC scores for methods A, B, and C across all OARs in the 10 evaluation cases were 0.67±0.08, 0.58±0.11, and 0.62±0.12, respectively. The difference in mean DSC scores was significant (p<0.05). The difference between methods A and B was significant for Lens-L, Lens-R, and Pituitary. Method A scored the highest DSC for all OARs except the Spinal Cord, Mandible-L, and Mandible-R; 16 OARs showed DSC≥0.6 on CSTRO. Methods D and E achieved average DSCs of 0.84±0.10 and 0.83±0.09, respectively, and all OARs showed DSC≥0.7 on PDDCA. The proposed CAU-Net achieved better results than the baseline networks for H&N OAR segmentation. This development opens the possibility of automated H&N organ segmentation and rapid contour delineation for radiotherapy. All networks trained on PDDCA scored higher than those trained on CSTRO; auto-segmentation results can therefore differ significantly when the same algorithm is trained on data from different institutions.
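The abstract does not give the exact layer configuration of the context extraction module; the snippet below is only a minimal PyTorch sketch of how a block built from dilated (atrous) convolutions can expose different receptive fields to different targets. The class name `ContextExtractor`, the channel counts, and the dilation rates (1, 2, 4) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' exact architecture) of a context-extraction
# block: parallel dilated 3x3 convolutions give each branch a different
# receptive field; a 1x1 convolution fuses the branches.
import torch
import torch.nn as nn

class ContextExtractor(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps spatial size.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 convolution to fuse the multi-receptive-field features.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))
```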
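The combined loss is likewise described only at a high level. The sketch below shows one plausible form of a Dice loss plus an occurrence-weighted contour loss; the soft-boundary extraction via pooling, the `class_weights` vector, and the mixing factor `alpha` are assumptions made for illustration, not the paper's exact formulation.

```python
# Assumes softmax probabilities and one-hot targets of shape (N, C, H, W).
import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    # Soft Dice over the spatial dimensions, averaged over classes and batch.
    inter = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def soft_boundary(mask, kernel_size=3):
    # Morphological-gradient style boundary: dilation minus erosion via pooling.
    pad = kernel_size // 2
    dilated = F.max_pool2d(mask, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel_size, stride=1, padding=pad)
    return dilated - eroded

def contour_loss(probs, target, class_weights):
    # Penalize disagreement between predicted and reference boundaries, weighting
    # each organ (channel) by an assumed occurrence-based weight vector of shape (C,).
    pred_b = soft_boundary(probs)
    true_b = soft_boundary(target)
    per_class = ((pred_b - true_b) ** 2).mean(dim=(0, 2, 3))
    return (class_weights * per_class).sum() / class_weights.sum()

def combined_loss(probs, target, class_weights, alpha=0.5):
    # Dice term for region overlap + contour term for boundary accuracy.
    return dice_loss(probs, target) + alpha * contour_loss(probs, target, class_weights)
```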
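The evaluation metric, in contrast, is standard: DSC = 2|A∩B| / (|A|+|B|). Below is a minimal NumPy sketch, assuming integer label maps with 0 as background and organ labels 1..22 for the CSTRO data.

```python
import numpy as np

def dsc(pred: np.ndarray, gold: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for a single organ label."""
    a = pred == label
    b = gold == label
    denom = a.sum() + b.sum()
    if denom == 0:
        return float("nan")  # organ absent from both segmentations
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example (hypothetical arrays): mean DSC over all organs of one case.
# scores = [dsc(auto_seg, gold_seg, k) for k in range(1, 23)]
# print(np.nanmean(scores))
```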