Abstract

Background and Objective: Accurate prediction of overall survival (OS) for lung cancer patients is of great clinical significance, as it can stratify patients into risk groups that benefit from personalized treatment. Histopathology slides are considered the gold standard for cancer diagnosis and prognosis, and many algorithms have been proposed to predict OS risk from them. Most rely on selecting key patches or morphological phenotypes from whole slide images (WSIs). However, OS prediction with existing methods remains challenging and of limited accuracy.

Methods: We propose a novel cross-attention-based dual-space graph convolutional neural network (CoADS). To improve survival prediction, the model accounts for the heterogeneity of tumor sections from complementary perspectives: CoADS uses information from both the physical space and the latent space. Guided by cross-attention, it effectively integrates the spatial proximity of WSI patches in physical space with their feature similarity in latent space.

Results: We evaluated our approach on two large lung cancer datasets totaling 1044 patients. Extensive experiments show that the proposed model outperforms state-of-the-art methods, achieving the highest concordance index.

Conclusions: Qualitative and quantitative results show that the proposed method is more effective at identifying pathology features associated with prognosis. Furthermore, the framework can be extended to other pathological images for predicting OS or other prognostic indicators, thereby supporting individualized treatment.
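The Methods summary rests on two graphs over WSI patches: one built from spatial proximity in physical space and one from feature similarity in latent space. The sketch below shows one plausible way to construct such adjacency matrices with k-nearest-neighbor links; the function names, the choice of k, and the cosine-distance metric are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of dual-space graph
# construction over WSI patches. All names and hyperparameters are assumptions.
import numpy as np

def knn_adjacency(distances: np.ndarray, k: int) -> np.ndarray:
    """Symmetric 0/1 adjacency connecting each node to its k nearest neighbors."""
    n = distances.shape[0]
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        # argsort puts the self-distance (0) first; skip it and keep the next k
        neighbors = np.argsort(distances[i])[1:k + 1]
        adj[i, neighbors] = 1.0
    return np.maximum(adj, adj.T)  # symmetrize

def build_dual_space_graphs(coords: np.ndarray, feats: np.ndarray, k: int = 8):
    """coords: (N, 2) patch centroids on the slide; feats: (N, D) patch embeddings."""
    # Physical space: Euclidean distance between patch locations.
    phys_dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Latent space: cosine distance between patch feature vectors.
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    latent_dist = 1.0 - normed @ normed.T
    return knn_adjacency(phys_dist, k), knn_adjacency(latent_dist, k)

# Example: 100 patches with 2-D coordinates and 512-D features.
rng = np.random.default_rng(0)
A_phys, A_latent = build_dual_space_graphs(rng.random((100, 2)), rng.random((100, 512)))
```

In a CoADS-style model, the two adjacency matrices would feed separate graph convolution branches whose patch representations are then fused by cross-attention; that fusion step is described in the full text and is not reproduced here.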
