Abstract

Quantitative positron emission tomography/computed tomography (PET/CT), owing to the functional metabolic information and the anatomical information of the human body that it presents, is useful for achieving accurate tumor delineation. However, manual annotation of a Volume Of Interest (VOI) is a labor-intensive and time-consuming task. In this study, we automatically segmented the Head and Neck (H&N) primary tumor in combined PET/CT images. Herein, we propose a convolutional neural network named Multimodal Spatial Attention Network (MSA-Net), supplemented with a Spatial Attention Module (SAM), which uses a PET image as an input. We evaluated this model on the MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation challenge dataset. Our method delivered competitive cross-validation performance, with a Dice Similarity Coefficient (DSC) of 0.757, precision of 0.788, and recall of 0.785. When we tested our method on the test dataset, we achieved an average DSC of 0.766 and an average Hausdorff Distance at 95% (HD95) of 3.155. Our team name is 'Heck_Uihak'.

Keywords: Multimodal image segmentation, PET-CT, Head and neck segmentation, HECKTOR 2021
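The abstract reports overlap-based metrics (DSC, precision, recall) between predicted and ground-truth tumor masks. The sketch below is an illustrative implementation of these standard definitions on binary segmentation masks, not code from the paper; the function name and the toy masks are ours.

```python
import numpy as np

def dice_precision_recall(pred, target, eps=1e-8):
    """Compute DSC, precision, and recall for binary segmentation
    masks of any shape (e.g. 3-D PET/CT tumor volumes)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()    # true positive voxels
    fp = np.logical_and(pred, ~target).sum()   # false positive voxels
    fn = np.logical_and(~pred, target).sum()   # false negative voxels
    dsc = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    precision = (tp + eps) / (tp + fp + eps)
    recall = (tp + eps) / (tp + fn + eps)
    return dsc, precision, recall

# Toy 2-D example: prediction overlaps the target on 3 voxels
pred = np.array([[1, 1, 0], [1, 1, 0]])
target = np.array([[1, 1, 0], [1, 0, 1]])
dsc, p, r = dice_precision_recall(pred, target)  # 0.75, 0.75, 0.75
```

The small `eps` term keeps the ratios defined when both masks are empty, a common convention when averaging per-case scores over a dataset.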
