Abstract

Identification of key phenotypic regions such as necrosis, contrast enhancement, and edema on magnetic resonance imaging (MRI) is important for understanding disease evolution and treatment response in patients with glioma. Manual delineation is time-intensive and not feasible in a clinical workflow. Automating phenotypic region segmentation overcomes many issues with manual segmentation; however, current glioma segmentation datasets focus on pre-treatment, diagnostic scans, where treatment effects and surgical cavities are not present. Thus, existing automatic segmentation models are not applicable to the post-treatment imaging used for longitudinal evaluation of care. Here, we present a comparison of three-dimensional convolutional neural networks (nnU-Net architecture) trained on large, temporally defined pre-treatment, post-treatment, and mixed cohorts. We used a total of 1563 imaging timepoints from 854 patients, curated from 13 different institutions as well as diverse public datasets, to understand the capabilities and limitations of automatic segmentation on glioma images with different phenotypic and treatment appearances. We assessed the performance of models using Dice coefficients on test cases from each group, comparing predictions with manual segmentations generated by trained technicians. We demonstrate that training a combined model can be as effective as models trained on just one temporal group. The results highlight the importance of a diverse training set, one that includes images from across the course of disease and with effects from treatment, in creating a model that can accurately segment glioma MRIs at multiple treatment timepoints.
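For reference, the evaluation metric named above is the standard Dice similarity coefficient between a predicted mask and a manual mask. Below is a minimal, illustrative Python sketch of a per-region Dice computation; the function name, the region-label variables, and the empty-mask convention are our own assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred ∩ truth| / (|pred| + |truth|).
    Illustrative only; the paper reports Dice per phenotypic region
    (e.g., necrosis, enhancement, edema) against manual segmentations.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        # Both masks empty: score as perfect agreement (one common
        # convention; the paper's handling of empty regions may differ).
        return 1.0
    return 2.0 * intersection / denom

# Hypothetical usage: score one region label in a multi-class volume.
# EDEMA_LABEL is a placeholder, not a value from the paper.
EDEMA_LABEL = 2
model_output = np.random.randint(0, 4, size=(8, 8, 8))  # stand-in prediction
manual_seg = np.random.randint(0, 4, size=(8, 8, 8))    # stand-in ground truth
score = dice_coefficient(model_output == EDEMA_LABEL,
                         manual_seg == EDEMA_LABEL)
print(f"Edema Dice: {score:.3f}")
```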
