Accurately segmenting and staging tumor lesions in cancer patients presents a significant challenge for radiologists, yet it is essential for devising effective treatment plans, including radiation therapy, personalized medicine, and surgical options. Artificial intelligence (AI), particularly deep learning (DL), has become a useful tool for radiologists, enhancing their ability to understand tumor biology and deliver personalized care to patients with head and neck (H&N) tumors. Segmenting H&N tumor lesions from Positron Emission Tomography/Computed Tomography (PET/CT) images has therefore gained considerable attention. However, the diverse shapes and sizes of tumors, along with indistinct boundaries between malignant and normal tissue, make it difficult to fuse PET and CT images effectively. To overcome these challenges, various DL-based models have been developed for segmenting tumor lesions in PET/CT images. This article reviews multimodality (PET/CT) H&N tumor lesion segmentation methods. We first discuss the strengths and limitations of PET/CT imaging and the importance of DL-based models in H&N tumor lesion segmentation. Second, we examine the current state-of-the-art DL models for H&N tumor segmentation, categorizing them by architecture into UNet, VNet, Vision Transformer, and miscellaneous models. Third, we explore the annotation and evaluation processes, addressing challenges in segmentation annotation and discussing the metrics used to assess model performance. Finally, we discuss several open challenges and provide avenues for future research in H&N tumor lesion segmentation.
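As a point of reference for the evaluation metrics mentioned above, the Dice similarity coefficient is a widely used measure of overlap between a predicted segmentation and its ground-truth annotation. The sketch below shows a minimal way it can be computed for binary lesion masks; the `dice_coefficient` helper, array shapes, and synthetic data are illustrative assumptions, not code from any reviewed model.

```python
# Minimal sketch (illustrative, not from the reviewed article): computing the
# Dice similarity coefficient for a binary predicted mask versus ground truth.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Usage with small synthetic 3D masks (a real PET/CT volume would be much larger).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((32, 32, 32)) > 0.7   # hypothetical ground-truth lesion mask
    pred = truth.copy()
    pred[:4] = False                         # simulate an imperfect prediction
    print(f"Dice: {dice_coefficient(pred, truth):.3f}")
```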