Atrial fibrillation (AF) is a common arrhythmia with a high incidence rate, and its diagnosis is time-consuming. Although many ECG classification models have been proposed to assist AF detection, they are prone to misclassifying noise signals that resemble arrhythmic beats and ignore the contextual information in long-term signals, both of which degrade detection performance. To address these problems, we propose a knowledge-embedded multimodal pseudo-siamese model. The proposed model comprises a temporal-spatial pseudo-siamese network (TSPS-Net) and a knowledge-embedded noise filter module. First, TSPS-Net adopts a parallel siamese architecture to process the multimodal representations. Second, a spatiotemporal collaborative fusion mechanism (STCFM) is proposed to fuse the multimodal features. Finally, medical knowledge is introduced to design handcrafted features, which are used to distinguish noise and are fused with the deep ECG features to obtain the final result. The model's performance is verified on the CinC 2017 dataset and the MIT-BIH AF dataset. Experimental results show that the average accuracy reaches 82.17% and 99.11%, and the F1 scores are 0.787 and 0.970 on the CinC 2017 and MIT-BIH datasets, respectively.
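To make the two-branch design concrete, the following is a minimal sketch of a pseudo-siamese model with a temporal (1-D signal) branch, a spatial (2-D representation) branch, and a late fusion with handcrafted features. All layer sizes, input shapes, and names (e.g. `TemporalSpatialPseudoSiamese`, the 8 handcrafted features) are illustrative assumptions and are not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TemporalSpatialPseudoSiamese(nn.Module):
    """Illustrative two-branch (pseudo-siamese) ECG classifier with feature fusion."""

    def __init__(self, num_classes=4, num_handcrafted=8):
        super().__init__()
        # Temporal branch: 1-D convolutions over the raw ECG sequence (assumed design).
        self.temporal = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (B, 16)
        )
        # Spatial branch: 2-D convolutions over an image-like representation,
        # e.g. a spectrogram (assumed design).
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 16)
        )
        # Fusion: concatenate deep features from both branches with
        # knowledge-based handcrafted features, then classify.
        self.classifier = nn.Linear(16 + 16 + num_handcrafted, num_classes)

    def forward(self, ecg_1d, ecg_2d, handcrafted):
        t = self.temporal(ecg_1d)                    # (B, 16)
        s = self.spatial(ecg_2d)                     # (B, 16)
        fused = torch.cat([t, s, handcrafted], dim=1)
        return self.classifier(fused)

# Example shapes (hypothetical): batch of 2, 3000-sample ECG segment,
# 64x64 image-like representation, 8 handcrafted features per record.
model = TemporalSpatialPseudoSiamese()
logits = model(torch.randn(2, 1, 3000), torch.randn(2, 1, 64, 64), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 4])
```

The key design choice mirrored here is that the two branches do not share weights (hence "pseudo-siamese"), since they operate on different modalities, and the handcrafted, knowledge-derived features enter only at the fusion stage.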