Automatic primary gross tumor volume (GTVp) segmentation for nasopharyngeal carcinoma (NPC) is a challenging task because tumors and their surroundings share similar visual characteristics, especially on computed tomography (CT) images with severely low contrast resolution. Consequently, most recently proposed methods based on radiomics or deep learning (DL) struggle to achieve good results on CT datasets. A peritumoral radiomics-guided generative adversarial network (PRG-GAN) was proposed to address this challenge. A total of 157 NPC patients with CT images were collected and divided into training, validation, and testing cohorts of 108, 19, and 30 patients, respectively. The proposed model was based on a standard GAN consisting of a generator network and a discriminator network. Morphological dilation of the initial segmentation results from the GAN was first conducted to delineate an annular peritumoral region, from which radiomics features were extracted as a priori guiding knowledge. Then, the radiomics features were fused with semantic features by the discriminator's fully connected layer to achieve voxel-level classification and segmentation. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average symmetric surface distance (ASSD) were used to evaluate segmentation performance, using paired-samples t-tests with Bonferroni correction and Cohen's d effect sizes. A two-sided p-value of less than 0.05 was considered statistically significant. The model-generated predictions had a high overlap ratio with the ground truth. The average DSC, HD95, and ASSD were significantly improved from 0.80±0.12, 4.65±4.71 mm, and 1.35±1.15 mm for the GAN to 0.85±0.18 (p=0.001, d=0.71), 4.15±7.56 mm (p=0.002, d=0.67), and 1.11±1.65 mm (p<0.001, d=0.46) for the PRG-GAN, respectively. Integrating radiomics features into a GAN is a promising way to overcome the limitation of unclear borders and increase the delineation accuracy of the GTVp for patients with NPC.
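The two building blocks named above — the annular peritumoral region obtained by morphological dilation, and the DSC used for evaluation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dilation `margin` and the toy masks are assumptions, since the abstract does not specify the dilation radius.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(mask: np.ndarray, margin: int = 3) -> np.ndarray:
    """Annular peritumoral region: dilated segmentation minus the segmentation.

    `margin` (number of dilation iterations) is an illustrative choice;
    the abstract does not state the dilation radius used in the paper.
    """
    mask = mask.astype(bool)
    return binary_dilation(mask, iterations=margin) & ~mask

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

if __name__ == "__main__":
    # Toy 2D example standing in for a 3D CT segmentation mask.
    tumor = np.zeros((9, 9), dtype=bool)
    tumor[3:6, 3:6] = True                 # 3x3 "tumor"
    ring = peritumoral_ring(tumor, margin=1)
    print(ring.sum())                      # voxels in the 1-voxel ring
    print(dice(tumor, tumor))              # perfect overlap -> 1.0
```

In the paper's pipeline, radiomics features would then be extracted from the CT intensities inside such a ring and fed to the discriminator; that step is omitted here.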