Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have shown efficacy in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear medicine physicians' assessment of glioma aggressiveness at diagnosis. Patients underwent dynamic 6-[18F]fluoro-L-DOPA PET acquisition. With a 75%/25% split into training (n = 63) and test (n = 22) sets, an ensemble ML model was trained on radiomics features extracted from static and dynamic parametric PET images to classify lesion aggressiveness. Three explainable ML methods, Local Interpretable Model-agnostic Explanations (LIME), Anchor, and SHapley Additive exPlanations (SHAP), generated patient-specific explanations. Eighteen physicians from eight institutions evaluated the test samples. In the first phase, physicians analyzed the 22 cases using only magnetic resonance and static/dynamic PET images, acquired within a maximum interval of 30 days. In the second phase, the same physicians reevaluated the same 22 cases using all available data, including the radiomics model predictions and explanations. Eighty-five patients (54 [39-62] years old, 41 women) were selected. In the second phase, physicians demonstrated a significant improvement in diagnostic accuracy compared with the first phase (0.775 [0.750-0.802] vs. 0.717 [0.694-0.737], p = 0.007). The explainable radiomics model increased inter-physician agreement, with a 22.72% rise in Fleiss's kappa, and significantly enhanced physician confidence (p < 0.001). Among all physicians, Anchor and SHAP were effective in 75% and 72% of cases, respectively, outperforming LIME (p ≤ 0.001). Our results highlight the potential of an explainable radiomics model using amino acid PET scans as a diagnostic support tool to assist physicians in identifying glioma aggressiveness.
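A minimal sketch of the kind of pipeline the abstract describes, assuming a scikit-learn ensemble classifier, a 75%/25% train/test split, and SHAP for patient-specific explanations; the synthetic data, the 20-feature dimensionality, and the choice of RandomForestClassifier are illustrative assumptions, not the authors' implementation.

# Sketch only (not the authors' code): ensemble classifier on radiomics features
# with per-patient SHAP explanations for the held-out test cases.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(85, 20))      # 85 patients x 20 radiomics features (synthetic placeholder)
y = rng.integers(0, 2, size=85)    # binary aggressiveness label (synthetic placeholder)

# 75%/25% split, matching the n = 63 / n = 22 partition reported in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Ensemble model (RandomForest assumed here for illustration)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Patient-specific explanations: per-feature SHAP contributions for each test case
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Test accuracy:", model.score(X_test, y_test))

In practice, LIME and Anchor explanations would be generated analogously per test case (e.g., with lime.lime_tabular.LimeTabularExplainer), so that each physician sees the model's prediction alongside the features driving it.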