Abstract

Segmentation of head and neck (H&N) cancer primary tumors and lymph nodes on medical imaging is a routine part of radiation treatment planning and may lead to improved response assessment and quantitative imaging analysis. Manual segmentation is a difficult and time-intensive task that requires specialist knowledge. In computer vision, deep learning-based architectures have achieved state-of-the-art (SOTA) performance on many downstream tasks, including medical image segmentation. Deep learning-based auto-segmentation tools may therefore improve the efficiency and robustness of H&N cancer segmentation. To encourage high-performing lesion segmentation methods that exploit the bi-modal information of PET and CT images, the HEad and neCK TumOR (HECKTOR) challenge is held annually. In this paper, we preprocess PET/CT images and train and evaluate several deep learning frameworks, including 3D U-Net, MNet, Swin Transformer, and nnU-Net (both 2D and 3D), to automatically segment primary tumors (GTVp) and cancerous lymph nodes (GTVn) on PET/CT images. Our investigations led us to three promising models for submission. Using 5-fold cross-validation with ensembling, evaluated on a blinded hold-out test set, we achieved aggregated Dice Similarity Coefficient (DSC) scores of 0.77 and 0.70 for the primary tumor and lymph nodes, respectively, in Task 1 of the HECKTOR 2022 challenge. Herein, we describe in detail the methodology and results of our three top-performing submitted models. Our investigations demonstrate the versatility and robustness of such deep learning models for automatic tumor segmentation, supporting improved H&N cancer treatment. Our full implementation, based on the PyTorch framework, and the trained models are available at https://github.com/xmuyzz/HECKTOR2022 (Team name: AIMERS).
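For context, the aggregated DSC cited above differs from a per-case average: it pools voxel counts across all test cases before forming the ratio, so no single small lesion dominates the score. The following is a minimal NumPy sketch of that computation under our reading of the challenge metric; the function names are illustrative and this is not the official evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Per-case Dice Similarity Coefficient for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def aggregated_dice(preds: list, targets: list) -> float:
    """Aggregated DSC: sum intersections and mask volumes over all
    cases first, then divide, instead of averaging per-case scores."""
    inter = sum(np.logical_and(p, t).sum() for p, t in zip(preds, targets))
    denom = sum(p.sum() + t.sum() for p, t in zip(preds, targets))
    return 2.0 * inter / denom if denom > 0 else 1.0
```

A per-class score (e.g., for GTVp and GTVn separately) would be obtained by passing in the binary masks for that class only.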
