Abstract

Nasopharyngeal carcinoma (NPC) is a malignant tumor whose main treatment is radiotherapy, and accurate delineation of the target tumor is essential for radiotherapy planning. NPC tumors are small and vary widely in shape and structure, so manually outlining them is time-consuming and laborious even for experienced radiologists. Automatic segmentation with deep learning could ease this burden; however, the performance of current deep learning models is unsatisfactory, mainly in the form of poor segmentation boundaries. To solve this problem, this paper proposes a segmentation method for NPC based on dynamic PET-CT image data, whose inputs include CT, PET, and parametric (Ki) images. The method uses a generative adversarial network whose generator is a modified UNet integrated with a Transformer (TG-Net) to achieve automatic segmentation of NPC on combined CT-PET-Ki images. In the encoding stage, TG-Net uses moving windows instead of traditional pooling operations to obtain patches of different sizes, which reduces information loss during encoding. Moreover, the Transformer enables the network to learn more representative features and improves the discriminative ability of the model, especially at tumor boundaries. Fivefold cross-validation yields an average Dice similarity coefficient of 0.9135, showing that our method achieves good segmentation performance, and comparative experiments show that our network outperforms state-of-the-art methods in NPC segmentation. In addition, this work is the first to use Ki images to assist tumor segmentation, and we demonstrate the benefit of adding them.
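The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it names: replacing pooling with a moving-window downsampler and inserting a Transformer block into the UNet encoder. The names (WindowDown, EncoderStage) and all hyperparameters (window size 3, stride 2, 64 channels, 4 heads) are our assumptions, not the authors' TG-Net.

```python
# Hypothetical sketch of the abstract's encoder ideas; NOT the authors' TG-Net.
import torch
import torch.nn as nn

class WindowDown(nn.Module):
    """Downsample with a moving window: unfold overlapping patches and
    project them linearly, so less information is discarded than with
    max-pooling (assumed reading of "moving windows replace pooling")."""
    def __init__(self, in_ch, out_ch, window=3, stride=2):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=window, stride=stride,
                                padding=window // 2)
        self.proj = nn.Linear(in_ch * window * window, out_ch)

    def forward(self, x):                      # x: (B, C, H, W), H and W even
        b, c, h, w = x.shape
        patches = self.unfold(x)               # (B, C*window*window, L)
        patches = patches.transpose(1, 2)      # (B, L, C*window*window)
        out = self.proj(patches)               # (B, L, out_ch)
        h2, w2 = h // 2, w // 2                # stride-2 halves each dimension
        return out.transpose(1, 2).reshape(b, -1, h2, w2)

class EncoderStage(nn.Module):
    """One encoder stage: moving-window downsampling followed by a
    Transformer block over the spatial tokens."""
    def __init__(self, in_ch, out_ch, heads=4):
        super().__init__()
        self.down = WindowDown(in_ch, out_ch)
        layer = nn.TransformerEncoderLayer(d_model=out_ch, nhead=heads,
                                           batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):
        x = self.down(x)                       # (B, out_ch, H/2, W/2)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.attn(tokens)             # self-attention over tokens
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Three input channels, matching the fused CT-PET-Ki input described above.
stage = EncoderStage(in_ch=3, out_ch=64)
x = torch.randn(1, 3, 128, 128)                # one fused CT-PET-Ki slice
print(stage(x).shape)                          # torch.Size([1, 64, 64, 64])
```

A full TG-Net would stack several such stages in a UNet with skip connections and train the generator against a discriminator; this sketch only illustrates how window unfolding can stand in for pooling while attention refines the features.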
