Abstract

Accurate survival prediction is essential for precision oncology in patients with glioma. However, current deep learning-based survival analysis methods rely heavily on segmented tumor regions, which require tedious manual annotation. Semi-supervised segmentation offers an efficient way to reduce this annotation burden, but most studies treat survival prediction and semi-supervised segmentation as two separate problems. Here, we propose a multi-task learning approach for concurrent survival prediction and semi-supervised tumor segmentation. We train a shared multi-modal Transformer encoder to extract features from multiple imaging modalities and fuse them at different levels. The extracted features are used to construct a contrastive learning loss and a survival analysis loss, implementing semi-supervised segmentation and survival analysis, respectively. Experiments are conducted on two datasets from two local hospitals. Our method achieves comparable or slightly better results than state-of-the-art semi-supervised segmentation methods, along with acceptable survival analysis results. Our results suggest that the proposed multi-task architecture can enhance both segmentation and survival prediction in a semi-supervised manner.
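The abstract describes combining a contrastive segmentation loss with a survival analysis loss over features from a shared encoder. As an illustration only, the sketch below assumes a Cox-style negative partial log-likelihood for the survival branch and a hypothetical weighting hyperparameter `lam` for combining the two losses; the paper does not specify its exact loss formulations.

```python
import numpy as np

def cox_partial_likelihood_loss(risk_scores, times, events):
    """Negative Cox partial log-likelihood over a batch (Breslow-style,
    ignoring tied event times for simplicity).

    risk_scores: predicted log-hazard per patient
    times:       observed event/censoring time per patient
    events:      1 if the event was observed, 0 if censored
    """
    order = np.argsort(-times)            # sort patients by descending time
    risk = risk_scores[order]
    ev = events[order]
    # Running log-sum-exp of risk scores gives log of each patient's risk set
    # (all patients still at risk at that patient's event time).
    log_risk_set = np.logaddexp.accumulate(risk)
    n_events = ev.sum()
    # Sum the partial log-likelihood only over observed events.
    return -np.sum((risk - log_risk_set) * ev) / max(n_events, 1)

def multitask_loss(seg_contrastive_loss, surv_loss, lam=0.5):
    # lam is a hypothetical trade-off weight between the two task losses.
    return seg_contrastive_loss + lam * surv_loss
```

With uninformative risk scores `[0, 0]` for two uncensored patients, the loss equals `log(2) / 2`; assigning a higher risk score to the patient who dies earlier lowers the loss, as a concordance-encouraging objective should.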
