Explaining the decisions of deep learning models, especially those for medical image segmentation, is a critical step toward the understanding and validation that will enable these powerful tools to gain wider adoption in healthcare. We introduce kernel-weighted contribution, a visual explanation method for three-dimensional medical image segmentation models that produces accurate and interpretable explanations. Unlike previous attribution methods, kernel-weighted contribution is explicitly designed for medical image segmentation models and assesses feature importance by the relative contribution of each considered activation map to the predicted segmentation. We evaluate our method on a synthetic dataset that provides complete knowledge of input features, together with a comprehensive explanation quality metric built on this ground truth. Our method and three other prevalent attribution methods were applied to five model layer combinations to explain segmentation predictions for 100 test samples and were compared using this metric. Kernel-weighted contribution produced superior explanations of the obtained segmentations when applied to both the encoder and decoder sections of a trained model, compared with other layer combinations. In between-method comparisons, kernel-weighted contribution produced superior explanations to the other methods on the same model layers in four of five experiments, and, when only the non-transpose convolution layers of the model decoder were used, performed on par with GradCAM++, with both outperforming the remaining methods. The reported method produced explanations of superior quality and is uniquely suited to exploit the architectural characteristics of image segmentation models, and of medical image segmentation models in particular. Both the synthetic dataset and an implementation of our method are available to the research community.
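The abstract does not spell out the computation, but its core idea, weighting each activation map by its relative contribution to the predicted segmentation, can be illustrated with a minimal ablation-style sketch. Everything below is an assumption for illustration: the function names (`kernel_weighted_contribution`, `forward_fn`) are hypothetical, and the contribution score shown (the drop in segmentation overlap when a map is ablated) is one plausible reading of "relative contribution"; the actual method is defined in the full paper.

```python
import numpy as np

def kernel_weighted_contribution(activation_maps, forward_fn, baseline_pred):
    """Hypothetical sketch of activation-map attribution for 3D segmentation.

    activation_maps : float array of shape (K, D, H, W), one layer's K maps
    forward_fn      : callable taking a (K, D, H, W) activation tensor and
                      returning a boolean (D, H, W) segmentation mask
    baseline_pred   : boolean (D, H, W) mask predicted from the intact maps
    """
    n_maps = activation_maps.shape[0]
    baseline_size = float(baseline_pred.sum())
    weights = np.zeros(n_maps)
    for k in range(n_maps):
        # Ablate map k and measure how much of the original segmentation
        # is lost; the relative drop serves as that map's contribution.
        ablated = activation_maps.copy()
        ablated[k] = 0.0
        pred_k = forward_fn(ablated)
        retained = float(np.logical_and(baseline_pred, pred_k).sum())
        weights[k] = (baseline_size - retained) / max(baseline_size, 1.0)
    # Sum the maps weighted by their contributions into a single
    # explanation volume, keeping only positively contributing regions.
    heatmap = np.tensordot(weights, activation_maps, axes=1)
    return np.maximum(heatmap, 0.0)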
```

In a real model the maps would come from a chosen encoder or decoder layer, and the resulting heatmap would be upsampled to input resolution; the authors' released implementation should be treated as the authoritative definition.