Abstract
Acute Stanford Type A aortic dissection (AAD-type A) and acute myocardial infarction (AMI) present with similar symptoms but require distinct treatments. Efficient differentiation is critical because many primary healthcare facilities have limited access to radiological equipment. This study develops a multimodal deep learning model integrating electrocardiogram (ECG) signals and laboratory indicators to enhance diagnostic accuracy for AAD-type A and AMI. We gathered ECG and laboratory data from 136 AAD-type A and 141 AMI patients at Zigong Fourth People's Hospital (January 2019 to December 2023) for training and validation. Utilizing ResNet-34 (residual network), we extracted ECG features and combined them with laboratory and demographic data. We assessed logistic regression, random forest, XGBoost, and LightGBM models, employing Shapley additive explanations (SHAP) for feature importance analysis. Data from 30 AMI and 32 AAD-type A patients (January to September 2024) were used as a prospective test set. Incorporating ECG features significantly improved the models' AUC values, with the random forest achieving the best performance (AUC 0.98 on the validation set and 0.969 on the test set). SHAP analysis revealed that troponin and D-dimer, along with the ECG embedding features extracted by the deep neural network, are key characteristics for differentiating AAD-type A and AMI. ECG features are valuable for distinguishing between AAD-type A and AMI, offering a novel tool for rapid cardiovascular disease diagnosis through multimodal data fusion and deep learning.
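The sketch below illustrates the kind of fusion pipeline the abstract describes: a ResNet-34 backbone producing ECG embeddings that are concatenated with laboratory/demographic features, fed to a random forest classifier, and analyzed with SHAP. Details not specified in the abstract (how the 12-lead ECG is represented as model input, whether the backbone is pretrained or fine-tuned, preprocessing, and hyperparameters) are assumptions; the data here are placeholders.

```python
# Minimal sketch of a multimodal ECG + laboratory fusion pipeline.
# Assumptions (not stated in the abstract): the ECG is rendered as a 3-channel
# 224x224 image, ResNet-34 is used as a fixed feature extractor without
# pretrained weights, and the tabular block holds lab/demographic values.
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier
import shap

# 1. ECG feature extractor: ResNet-34 with its classification head removed,
#    yielding a 512-dimensional embedding per ECG.
backbone = models.resnet34(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def ecg_embedding(ecg_images: torch.Tensor) -> np.ndarray:
    """Map a batch of ECG images (N, 3, 224, 224) to 512-d embeddings."""
    with torch.no_grad():
        return backbone(ecg_images).numpy()

# 2. Fuse ECG embeddings with laboratory/demographic features (placeholder data).
n_patients = 64
ecg_images = torch.rand(n_patients, 3, 224, 224)    # placeholder ECG renderings
lab_features = np.random.rand(n_patients, 6)        # e.g. troponin, D-dimer, age, ...
labels = np.random.randint(0, 2, n_patients)        # 0 = AMI, 1 = AAD-type A

X = np.hstack([ecg_embedding(ecg_images), lab_features])

# 3. Train the tree-based classifier reported as best-performing in the study.
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)

# 4. SHAP feature-importance analysis over the fused feature vector.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
```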