Abstract

Tumor lesion segmentation and staging in cancer patients is one of the most challenging tasks radiologists face when recommending treatment plans such as radiation therapy, personalized medicine, and surgery. Recently, Deep Learning (DL) has emerged as an assistive technology to help radiologists characterize tumor biology and manage cancer patients. Multi-modality Positron Emission Tomography/Computed Tomography (PET/CT) image-based tumor segmentation has therefore attracted considerable attention. However, fusing PET and CT information raises several serious challenges, including intra-class variability, contrast issues, modality discrepancy (differences in tumor shape and size between modalities), and blurred boundaries between tumor and normal tissue (low specificity). To address these challenges, various DL-based tumor auto-segmentation methods have been proposed that exploit the complementary and sometimes contradictory anatomical and functional information in multi-modality PET/CT. This survey paper provides an in-depth exploration of these auto-segmentation methods. First, we discuss the weaknesses of PET and CT individually, the need for combined PET/CT, and the challenges of multi-modality PET/CT images. Second, we provide a detailed discussion of the parameters used to evaluate the achievements and limitations of the reviewed methods. Third, we classify the existing solutions into three major groups based on model architecture: single-network, multiple-network, and hybrid-network models. The multiple-network models are further divided into ensemble, multi-task, and Generative Adversarial Network (GAN) models. Furthermore, we discuss how these solutions improve segmentation performance, along with their strengths and weaknesses. Finally, we present open research challenges and recommend potential future directions.
