Abstract

We address two long-standing radiogenomic challenges in glioma subtype and survival prediction: (1) how to leverage large amounts of unlabeled magnetic resonance imaging (MRI) data and (2) how to unite MRI data and genomic data. We propose a novel application of multi-task learning (MTL) that leverages unlabeled MRI data by jointly learning an auxiliary tumor segmentation task with glioma subtype prediction and that can learn from patients with and without genomic data. We analyze multi-parametric MRI data from 542 patients in the combined training, validation, and testing sets of the 2018 Multimodal Brain Tumor Segmentation Challenge and somatic copy number alteration (SCNA) data from 1090 patients in The Cancer Genome Atlas (TCGA) lower-grade glioma and glioblastoma projects. Our MTL model significantly outperforms comparable classification models trained only on labeled MRI data for both IDH1/2 mutation and 1p/19q co-deletion subtype prediction tasks. We also show that embeddings produced by our MTL models improve survival prediction beyond models using MRI or SCNA data alone.
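To make the MTL formulation concrete, the sketch below illustrates one way such a model could be structured: a shared encoder feeding a segmentation head (the auxiliary task) and a subtype classification head, with a joint loss that masks the classification term for patients lacking genomic labels so that their MRI scans still supervise the shared encoder. This is a minimal illustration under assumed architectural choices (layer sizes, binary segmentation, PyTorch), not the authors' released implementation.

```python
# Hedged sketch (not the authors' code): multi-task network with a shared
# encoder, an auxiliary tumor-segmentation head, and a subtype classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskGliomaNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=2):  # 4 MRI modalities assumed
        super().__init__()
        # Shared 3D encoder (deliberately simplified; the real backbone is an assumption)
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, 1, 1)      # per-voxel tumor mask (binary here)
        self.cls_head = nn.Linear(32, num_classes)  # e.g., IDH1/2 mutation status

    def forward(self, x):
        feats = self.encoder(x)                  # (B, 32, D, H, W)
        seg_logits = self.seg_head(feats)        # auxiliary segmentation output
        pooled = feats.mean(dim=(2, 3, 4))       # global average pooling
        cls_logits = self.cls_head(pooled)       # subtype prediction
        return seg_logits, cls_logits

def mtl_loss(seg_logits, seg_target, cls_logits, cls_target, seg_weight=1.0):
    """Joint loss. cls_target entries of -1 mark patients without genomic labels;
    they are masked out of the classification term but still drive the
    segmentation loss, so unlabeled MRI data trains the shared encoder."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    labeled = cls_target >= 0
    cls_loss = (
        F.cross_entropy(cls_logits[labeled], cls_target[labeled])
        if labeled.any() else seg_logits.new_zeros(())
    )
    return seg_weight * seg_loss + cls_loss
```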
