Abstract

AIMS
Brain tumour segmentation remains a challenging task, complicated by the marked heterogeneity of imaging appearances and their distribution across multiple modalities: FLAIR, T1-weighted, T2-weighted, and contrast-enhanced T1-weighted (T1CE) sequences. However, all four imaging sequences are not always available. The causes are many, with common examples including corruption by image artefacts and acquisition constraints, such as those imposed in pre-operative stealth studies. We therefore aimed to quantify how well tumour segmentation models perform with incomplete imaging data.

METHOD
We developed a collection of 30 state-of-the-art, nnU-Net-derived deep learning tumour segmentation models and deployed them across all possible combinations of imaging modalities, trained and tested with five-fold cross-validation on the 2021 BraTS-RSNA glioma population of 1251 patients, with additional out-of-sample comparison to neuroradiologist hand-labelled lesions from our own centre.

RESULTS
Regardless of the imaging available, models largely performed well. The best models at each degree of missingness were as follows: one sequence available, FLAIR (Dice 0.938); two sequences available, FLAIR + T1CE (Dice 0.943); three sequences available, FLAIR + T1CE + T2 (Dice 0.945). In comparison, a model with complete data (FLAIR + T1 + T1CE + T2) achieved a similar Dice coefficient of 0.945.

CONCLUSION
Tumour segmentation models with missing sequences, a common occurrence in clinical practice, still delineate lesions well, often with performance comparable to when all data are available. This creates opportunities for quantitative imaging in patients and clinical situations in which full MRI acquisitions are not possible.
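As context for the Dice coefficients and modality combinations reported above, a minimal sketch of how the Dice similarity coefficient is computed on binary segmentation masks, and how the non-empty subsets of the four sequences enumerate to fifteen combinations. This is an illustrative reimplementation under common conventions, not code from the study itself:

```python
import numpy as np
from itertools import combinations

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / denominator if denominator else 1.0

# Every non-empty subset of the four MRI sequences (15 in total);
# one model per subset would already account for 15 of the 30 models.
modalities = ["FLAIR", "T1", "T1CE", "T2"]
subsets = [c for r in range(1, 5) for c in combinations(modalities, r)]
```

A Dice of 1.0 indicates perfect overlap between predicted and reference lesion masks; the reported values around 0.94 therefore indicate near-complete agreement with the reference segmentations.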

