Abstract

Purpose

Automated glioblastoma segmentation from magnetic resonance imaging (MRI) is generally performed on a four-modality input comprising T1, contrast-enhanced T1 (T1CE), T2, and FLAIR. We hypothesize that these image combinations contain redundant information, which can reduce a model's performance. Moreover, for clinical applications, the risk of encountering missing data rises as the number of required input modalities increases. This study therefore aimed to explore the relevance and influence of the different modalities used for MRI-based glioblastoma segmentation.

Methods

Multiple segmentation models based on the nnU-Net and SwinUNETR architectures were trained, differing only in the number and combination of input modalities. Each model was then evaluated with regard to segmentation accuracy and epistemic uncertainty.

Results

T1CE-based segmentation (for enhancing tumor and tumor core) and T1CE-FLAIR-based segmentation (for whole tumor and overall segmentation) reached segmentation accuracies comparable to the full-input version. Notably, the highest segmentation accuracy for nnU-Net was found for the three-input configuration T1CE-FLAIR-T1, suggesting a confounding effect of redundant input modalities. The SwinUNETR architecture appears to suffer less from this: its three-input and full-input models yielded statistically equal results.

Conclusion

The T1CE-FLAIR-based model can therefore be considered a minimal-input alternative to the full-input configuration. Adding modalities beyond this does not statistically improve accuracy and can even deteriorate it, but does lower segmentation uncertainty.
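The study compares models that differ only in which subset of the four MRI modalities they receive. As a minimal sketch of how such an experiment grid could be enumerated (the exact set of combinations trained in the paper is an assumption here), the non-empty modality subsets can be generated as follows:

```python
from itertools import combinations

# The four standard BraTS-style input modalities discussed in the abstract.
MODALITIES = ["T1", "T1CE", "T2", "FLAIR"]

def modality_subsets():
    """Enumerate every non-empty combination of input modalities,
    from single-input up to the full four-modality configuration."""
    subsets = []
    for r in range(1, len(MODALITIES) + 1):
        subsets.extend(combinations(MODALITIES, r))
    return subsets

# 2^4 - 1 = 15 candidate input configurations, one model per subset.
print(len(modality_subsets()))  # → 15
```

Each subset would then define the input-channel configuration of one trained model, e.g. `("T1CE", "FLAIR")` for the minimal-input alternative highlighted in the conclusion.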
