Abstract
Esophageal cancer is a common malignant tumor, and accurately predicting patient survival is crucial for personalized treatment. However, existing region-of-interest (ROI) based methods not only require prior medical knowledge for tumor delineation but may also make the model overly sensitive to the ROI. To address these challenges, we develop an automated Hybrid Transformer learning framework that integrates a Hybrid Transformer size-aware U-Net with a ranked survival prediction network to enable automatic survival prediction for esophageal cancer. Specifically, we first incorporate a Transformer with shifted-window multi-head self-attention (SW-MSA) into the base of the U-Net encoder to capture long-range dependencies in CT images. Furthermore, to alleviate the imbalance between the ROI and the background in CT images, we devise a size-aware coefficient for the segmentation loss. Finally, we design a ranked pair sorting loss to more comprehensively capture the ranking information inherent in CT images. We evaluate the proposed method on a dataset of 759 esophageal cancer samples. Experimental results demonstrate the superior performance of our method in survival prediction, even without ROI ground truth.
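The abstract does not give the exact formulations of the size-aware segmentation coefficient or the ranked pair sorting loss, so the sketch below is only an illustrative assumption of how such losses are commonly realized in PyTorch: a Dice-style segmentation loss reweighted by the relative ROI size, and a pairwise hinge ranking loss over comparable patient pairs. Function names, shapes, and the specific weighting scheme are hypothetical, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code) of a size-aware
# segmentation loss and a ranked pairwise survival loss in PyTorch.
import torch
import torch.nn.functional as F


def size_aware_dice_loss(pred_mask, true_mask, eps=1e-6):
    """Dice-style loss with an assumed size-aware coefficient: samples whose
    ROI occupies a smaller fraction of the CT volume get a larger weight,
    counteracting the ROI/background imbalance."""
    # pred_mask, true_mask: (B, 1, D, H, W) probabilities / binary labels.
    dims = tuple(range(1, pred_mask.dim()))
    intersection = (pred_mask * true_mask).sum(dim=dims)
    union = pred_mask.sum(dim=dims) + true_mask.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (union + eps)

    # Assumed coefficient: inverse of the ROI's fractional volume,
    # normalized over the batch.
    roi_fraction = true_mask.float().mean(dim=dims).clamp_min(eps)
    coeff = 1.0 / roi_fraction
    coeff = coeff / coeff.sum()
    return (coeff * (1.0 - dice)).sum()


def ranked_pair_loss(risk, time, event, margin=0.0):
    """Pairwise ranking loss over comparable pairs: if patient i had an
    observed event before patient j's follow-up time, the predicted risk
    of i should exceed that of j (hinge penalty otherwise)."""
    # risk: (B,) predicted risk scores; time: (B,) follow-up times;
    # event: (B,) 1 if the event was observed, 0 if censored.
    t_i, t_j = time.unsqueeze(1), time.unsqueeze(0)
    comparable = (t_i < t_j) & (event.unsqueeze(1) == 1)
    diff = risk.unsqueeze(1) - risk.unsqueeze(0)   # r_i - r_j for all pairs
    losses = F.relu(margin - diff)                 # penalize r_i <= r_j
    n = comparable.sum().clamp_min(1)
    return (losses * comparable.float()).sum() / n
```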