Abstract

Accurate survival prediction is essential for precision oncology in patients with glioma. However, current deep learning-based survival analysis methods rely heavily on segmented tumor regions, which require tedious manual annotation. Semi-supervised segmentation offers an efficient way to reduce the annotation burden, but most studies treat survival prediction and semi-supervised segmentation as two separate problems. Here, we propose a multi-task learning approach for concurrent survival prediction and semi-supervised tumor segmentation. We train a shared multi-modal Transformer encoder to extract features from multiple modalities and fuse them at different levels. The extracted features are used to construct a contrastive learning loss and a survival analysis loss, implementing semi-supervised segmentation and survival analysis, respectively. Experiments are conducted on two datasets from two local hospitals. Our method achieves comparable or slightly better results than state-of-the-art semi-supervised segmentation methods and achieves acceptable survival analysis results. Our results suggest that the proposed multi-task architecture can enhance both segmentation and survival prediction in a semi-supervised learning manner.
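The abstract describes a joint objective combining a contrastive segmentation loss with a survival analysis loss. The paper's exact losses and weighting are not given here, so the following is only a minimal sketch under common assumptions: an InfoNCE-style contrastive loss, the standard negative Cox partial log-likelihood as the survival loss, and a hypothetical weighting parameter `lambda_surv`. All function names are illustrative, not from the paper.

```python
import numpy as np

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood, a standard survival loss.

    risk:  predicted risk scores, shape (n,)
    time:  observed survival/censoring times, shape (n,)
    event: 1 if the event occurred, 0 if censored, shape (n,)
    """
    order = np.argsort(-time)                 # sort by descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))  # log risk-set sums
    return -np.mean((risk - log_cumsum)[event == 1])

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on feature vectors (illustrative)."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    return -logits[0] + np.log(np.sum(np.exp(logits)))

def multi_task_loss(anchor, positive, negatives,
                    risk, time, event, lambda_surv=1.0):
    """Joint objective: contrastive term plus weighted survival term."""
    return info_nce(anchor, positive, negatives) + \
           lambda_surv * cox_ph_loss(risk, time, event)
```

In a real pipeline both terms would be computed from features produced by the shared multi-modal Transformer encoder; the weighting `lambda_surv` is an assumed hyperparameter balancing the two tasks.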

