Abstract

Background: Automated assessment of the extent of resection (EOR) in glioblastomas is challenging, as it requires precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms.

Methods: To develop the segmentation model, a training cohort from three research institutions and three public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor, edema, and surgical cavity served as training data. The models were trained using the MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used.

Results: The study included 586 scans: 395 for model training, 52 for model comparison, and 139 for independent validation. The nnU-Net framework produced the best model, with median Dice scores of 0.81 for contrast-enhancing tumor, 0.77 for edema, and 0.81 for surgical cavity. Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort.

Conclusion: Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.
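For reference, the Dice scores reported above measure voxel-wise overlap between a predicted segmentation mask and its ground-truth label. The sketch below shows a minimal per-class computation; the label values and variable names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Hypothetical label encoding for a multi-class postoperative segmentation map:
# 1 = contrast-enhancing tumor, 2 = edema, 3 = surgical cavity.
labels = {1: "contrast-enhancing tumor", 2: "edema", 3: "surgical cavity"}

# 'prediction' and 'ground_truth' would be integer label volumes of equal shape;
# per-class scores are obtained by binarizing each label separately, e.g.:
# scores = {name: dice_score(prediction == k, ground_truth == k)
#           for k, name in labels.items()}
```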