Abstract

Accurate breast cancer prognosis prediction can help clinicians develop appropriate treatment plans and improve patients' quality of life. Recent prognostic prediction studies suggest that fusing multi-modal data, e.g., genomic data and pathological images, plays a crucial role in improving predictive performance. Despite the promising results of existing approaches, challenges remain in effective multi-modal fusion. First, albeit a powerful fusion technique, the Kronecker product produces a high-dimensional quadratic expansion of features that may incur high computational cost and a risk of overfitting, limiting its performance and applicability in cancer prognosis prediction. Second, most existing methods focus on learning cross-modality relations between modalities while ignoring modality-specific relations, which are complementary to cross-modality relations and beneficial for cancer prognosis prediction. To address these challenges, in this study we propose a novel attention-based multi-modal network that accurately predicts breast cancer prognosis by efficiently modeling both modality-specific and cross-modality relations without introducing high-dimensional features. Specifically, two intra-modality self-attention modules and an inter-modality cross-attention module, accompanied by a latent-space transformation of the channel affinity matrix, are developed to capture modality-specific and cross-modality relations, respectively, for efficient integration of genomic data and pathological images. Moreover, we design an adaptive fusion block to take full advantage of both modality-specific and cross-modality relations. Comprehensive experiments demonstrate that our method effectively boosts breast cancer prognosis prediction performance and compares favorably with state-of-the-art methods.
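To make the fusion idea concrete, the following is a minimal PyTorch sketch of the kind of architecture the abstract describes: per-modality self-attention (modality-specific relations), cross-modality attention (cross-modality relations), and a learned gate that adaptively combines the two, avoiding a Kronecker-product expansion. This is not the authors' released code; all module names, dimensions, and the pooling/gating choices are illustrative assumptions.

```python
# Hypothetical sketch of attention-based multi-modal fusion for prognosis
# prediction; dimensions and module names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    def __init__(self, dim_gene=256, dim_img=256, dim_latent=128, n_heads=4):
        super().__init__()
        # Project both modalities into a shared latent space.
        self.proj_gene = nn.Linear(dim_gene, dim_latent)
        self.proj_img = nn.Linear(dim_img, dim_latent)
        # Intra-modality (modality-specific) self-attention.
        self.self_attn_gene = nn.MultiheadAttention(dim_latent, n_heads, batch_first=True)
        self.self_attn_img = nn.MultiheadAttention(dim_latent, n_heads, batch_first=True)
        # Inter-modality cross-attention: each modality queries the other.
        self.cross_attn_gene = nn.MultiheadAttention(dim_latent, n_heads, batch_first=True)
        self.cross_attn_img = nn.MultiheadAttention(dim_latent, n_heads, batch_first=True)
        # Adaptive fusion: gate weighting modality-specific vs. cross-modality features.
        self.gate = nn.Sequential(nn.Linear(4 * dim_latent, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(2 * dim_latent, 1)  # prognosis risk score

    def forward(self, gene_tokens, img_tokens):
        # gene_tokens: (B, Ng, dim_gene); img_tokens: (B, Ni, dim_img)
        g = self.proj_gene(gene_tokens)
        v = self.proj_img(img_tokens)
        # Modality-specific relations within each modality.
        g_self, _ = self.self_attn_gene(g, g, g)
        v_self, _ = self.self_attn_img(v, v, v)
        # Cross-modality relations: genomic features attend to image features and vice versa.
        g_cross, _ = self.cross_attn_gene(g, v, v)
        v_cross, _ = self.cross_attn_img(v, g, g)
        # Pool the token dimension to per-sample vectors.
        intra = torch.cat([g_self.mean(1), v_self.mean(1)], dim=-1)
        cross = torch.cat([g_cross.mean(1), v_cross.mean(1)], dim=-1)
        # Adaptively fuse the two relation types instead of a Kronecker product.
        w = self.gate(torch.cat([intra, cross], dim=-1))   # (B, 2)
        fused = w[:, :1] * intra + w[:, 1:] * cross        # (B, 2*dim_latent)
        return self.head(fused)                            # (B, 1)


if __name__ == "__main__":
    model = AttentionFusion()
    risk = model(torch.randn(2, 50, 256), torch.randn(2, 100, 256))
    print(risk.shape)  # torch.Size([2, 1])
```

Note that fusing two D-dimensional embeddings this way keeps the joint representation at O(D), whereas a Kronecker-product fusion would produce an O(D²) feature, which is the efficiency argument made above.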
