Background
The electrocardiogram (ECG) is a critical diagnostic tool for screening atrial fibrillation (AF). In recent years, several deep learning-based methods for automatic ECG diagnosis have been proposed and have yielded satisfactory results. However, the lack of interpretability in these systems limits their clinical use.

Methods
To address this issue, we propose a medical explainable AF diagnosis system (M-XAF) that not only predicts AF diagnostic results but also generates an analysis report of the ECG medical features that influenced the classification model's decision. Using a novel occlusion analysis method for ECG signal features, M-XAF establishes a connection between the medical knowledge space and the representation space of the convolutional neural network (CNN) by extracting understandable semantic sensitive channels (SSCs) via gradient-weighted channel sensitivity. Through visualization and statistical modeling of the SSCs, M-XAF provides multimodal explanations for its AF diagnostic results, including ECG feature descriptions, regions of interest (ROI), and class activation mapping (CAM).

Conclusion
The experimental results demonstrate that M-XAF achieves satisfactory performance on the AF identification task, with accuracy, specificity, positive predictive value (PPV), F1 score, and Matthews correlation coefficient (MCC) exceeding 0.9715, 0.9896, 0.9778, 0.9567, and 0.9365, respectively, on two public ECG datasets. In addition, M-XAF generates a quantitative explainable report and a CAM for each predicted sample from the perspective of the classification model. Furthermore, the explainable reports provided by M-XAF can help address the high-confidence problem in classification models. The explainable framework in M-XAF can therefore be applied broadly to AI-ECG classification scenarios.
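
To make the Methods concrete, the sketch below illustrates how gradient-weighted channel sensitivity could be used to rank a CNN's feature-map channels and select the top-k as candidate SSCs. The toy architecture (`ToyECGNet`), the Grad-CAM-style scoring rule, and the top-k selection are all assumptions for illustration; the abstract does not specify M-XAF's actual architecture or its exact sensitivity formulation.

```python
import torch
import torch.nn as nn

class ToyECGNet(nn.Module):
    """Minimal 1-D CNN standing in for the AF classifier (an assumption;
    the abstract does not describe M-XAF's architecture)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

def semantic_sensitive_channels(model, ecg, target_class, top_k=8):
    """Rank feature-map channels by gradient-weighted sensitivity and
    return the indices of the top-k as candidate SSCs. The score used
    here, the time average of (d logit / d A_c) * A_c, is a
    Grad-CAM-style weighting assumed for illustration."""
    fmap = model.features(ecg)                     # (B, C, T) activations
    logit = model.head(fmap)[0, target_class]      # target-class score
    grads, = torch.autograd.grad(logit, fmap)      # d logit / d fmap
    sensitivity = (grads * fmap).mean(dim=-1)      # (B, C) channel scores
    return torch.topk(sensitivity[0], k=top_k).indices

model = ToyECGNet().eval()
ecg = torch.randn(1, 1, 3000)                      # one single-lead segment
sscs = semantic_sensitive_channels(model, ecg, target_class=1)
print("candidate SSC indices:", sscs.tolist())
```

Summing the gradient-weighted channels over the channel axis and applying a ReLU would additionally yield a Grad-CAM-style time-domain activation map, analogous to the CAM output the abstract describes.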
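
Similarly, the occlusion analysis that links medical ECG features to model behavior could follow the pattern below: mask a candidate feature region (e.g., a P-wave segment) and measure the drop in the target-class score. The zero-masking scheme and the sample-index region annotations are assumptions for illustration; M-XAF's actual occlusion procedure is not specified in the abstract.

```python
import torch
# Reuses ToyECGNet from the sketch above.

def feature_occlusion_scores(model, ecg, target_class, regions):
    """Occlude each annotated ECG feature region (start, end in samples)
    and record the drop in the target-class logit; larger drops mark
    regions of interest. Zero-masking is an assumed occlusion scheme."""
    with torch.no_grad():
        base = model(ecg)[0, target_class].item()
        scores = {}
        for name, (start, end) in regions.items():
            occluded = ecg.clone()
            occluded[..., start:end] = 0.0         # mask the feature span
            scores[name] = base - model(occluded)[0, target_class].item()
    return scores

# Hypothetical sample-index annotations of medically meaningful segments.
regions = {"P-wave": (120, 200), "QRS complex": (200, 280), "T-wave": (280, 420)}
model = ToyECGNet().eval()
ecg = torch.randn(1, 1, 3000)
print(feature_occlusion_scores(model, ecg, target_class=1, regions=regions))
```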