Abstract

Imaging-based differentiation between glioblastoma (GB) and brain metastases (BM) remains challenging. Our aim was to evaluate the performance of 3D convolutional neural networks (CNNs) for this binary classification problem. T1-CE, T2WI, and FLAIR 3D-segmented masks of 307 patients (157 GB and 150 BM) were generated after resampling, co-registration, normalization, and semi-automated 3D segmentation, and were used for internal model development. Subsequent external validation was performed on 59 cases (27 GB and 32 BM) from another institution. Four mask-sequence combinations were evaluated using area under the curve (AUC), precision, recall, and F1 scores. The diagnostic performance of a neuroradiologist and a general radiologist, both without and with the model output available, was also assessed. The 3D model using the T1-CE tumor mask (TM) showed the highest performance on the external test set [AUC 0.93 (95% CI 0.858-0.995)], followed closely by the model combining the T1-CE TM with the FLAIR mask of the peri-tumoral region (PTR) [AUC 0.91 (95% CI 0.834-0.986)]. Models using T2WI masks performed robustly on the internal dataset but less well on the external set. Both the neuroradiologist and the general radiologist improved when the model output was provided [AUC increased from 0.89 to 0.968 (p = 0.06) and from 0.78 to 0.965 (p = 0.007), respectively], with only the latter improvement reaching statistical significance. 3D CNNs showed robust performance for differentiating GB from BM using the T1-CE TM, either alone or combined with the FLAIR PTR mask. Availability of the model output significantly improved the accuracy of the general radiologist.
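The abstract does not specify the network architecture, framework, or hyperparameters. Purely as an illustrative sketch of how a 3D-CNN binary classifier over segmented mask volumes could be set up and scored with the reported metrics (AUC, precision, recall, F1), the PyTorch example below may help; the layer sizes, two-channel input (e.g., T1-CE TM plus FLAIR PTR mask), 96³ volume shape, 0.5 decision threshold, and all names are assumptions, not the authors' implementation, and the confidence intervals, p-values, and reader studies are not reproduced.

```python
# Minimal sketch (not the authors' code): a small 3D CNN for GB-vs-BM classification
# over co-registered, resampled mask volumes, evaluated with AUC/precision/recall/F1.
# Layer widths, input shape, and variable names are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

class Small3DCNN(nn.Module):
    def __init__(self, in_channels: int = 2):  # e.g. T1-CE tumor mask + FLAIR peri-tumoral mask
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global average pooling over the 3D volume
        )
        self.classifier = nn.Linear(64, 1)  # single logit: GB vs BM

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

def evaluate(model: nn.Module, volumes: torch.Tensor, labels: torch.Tensor) -> dict:
    """Compute the metrics reported in the abstract on a held-out (e.g. external) set."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(volumes)).cpu().numpy()
    preds = (probs >= 0.5).astype(int)          # assumed 0.5 decision threshold
    y = labels.cpu().numpy()
    precision, recall, f1, _ = precision_recall_fscore_support(y, preds, average="binary")
    return {"auc": roc_auc_score(y, probs), "precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    # Dummy tensors standing in for 3D-segmented masks: (batch, channels, D, H, W).
    x = torch.randn(8, 2, 96, 96, 96)
    y = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1]).float()  # 0 = BM, 1 = GB (arbitrary coding)
    model = Small3DCNN(in_channels=2)
    print(evaluate(model, x, y))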
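```

In a setup like the one described, the masked volumes would come from the resampled, co-registered, and normalized sequences, with the external institution's cases held out entirely from training.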
