Objective: To develop and validate an artificial intelligence (AI) diagnostic model for coronary artery disease based on facial photos.

Methods: This was a cross-sectional study. Patients scheduled to undergo coronary angiography (CAG) at Beijing Anzhen Hospital and Beijing Daxing Hospital between August 2022 and November 2023 were enrolled consecutively. Before CAG, facial photos were collected from four angles: frontal view, left 60° profile, right 60° profile, and top of the head. The photo dataset was randomly divided into training and validation sets (70%) and a test set (30%). The model was constructed using Masked Autoencoder (MAE) and Vision Transformer (ViT) architectures. First, the model backbone was pre-trained on 2 million facial photos from the publicly available VGGFace dataset, then fine-tuned on the training and validation sets and evaluated on the test set. In addition, a ResNet architecture was trained on the same dataset, and its performance was compared with that of the MAE- and ViT-based models. In the test set, the area under the receiver operating characteristic curve (AUC) of the AI model was calculated using CAG results as the gold standard.

Results: A total of 5 974 participants aged 61 (54, 67) years were included, including 4 179 males (70.0%), contributing 84 964 facial photos in total. The training and validation sets contained 79 140 facial photos, including 3 822 patients with coronary artery disease; the test set contained 5 824 facial photos, including 239 patients with coronary artery disease. The AUC values of the MAE and ViT models initialized with pre-trained weights were 0.841 and 0.824, respectively. The AUC of the ResNet model initialized with random weights was 0.810, while the AUC of the ResNet model initialized with pre-trained weights was 0.816.

Conclusion: The AI model based on facial photos showed good diagnostic performance for coronary artery disease and holds promise for further application in early diagnosis.
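For readers wanting a concrete picture of the pipeline described in Methods, the following is a minimal sketch, not the authors' code: it fine-tunes a ViT-Base encoder (assumed to be initialized from MAE pre-trained weights stored in a hypothetical local checkpoint) for binary coronary artery disease classification from facial photos and computes the AUC against the angiography-derived label. The data loaders, checkpoint path, and per-photo labeling are assumptions; the original study may aggregate the four photo angles per patient differently.

```python
"""Sketch only: ViT fine-tuning for binary CAD classification + AUC evaluation."""
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import timm
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# ViT-Base encoder with a single-logit head; in the study the encoder weights
# would come from MAE pre-training on ~2 million VGGFace photos.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=1)
# state = torch.load("mae_vggface_pretrain.pth", map_location="cpu")  # hypothetical checkpoint
# model.load_state_dict(state, strict=False)
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

def train_one_epoch(loader: DataLoader) -> None:
    """One pass over the training/validation photos (images: B x 3 x 224 x 224, labels: 0/1 CAD)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.float().to(device)
        logits = model(images).squeeze(1)
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def evaluate_auc(loader: DataLoader) -> float:
    """Compute AUC on the test set with CAG results as the gold-standard label."""
    model.eval()
    scores, targets = [], []
    for images, labels in loader:
        probs = torch.sigmoid(model(images.to(device)).squeeze(1))
        scores.extend(probs.cpu().tolist())
        targets.extend(labels.tolist())
    return roc_auc_score(targets, scores)
```

The same loop can be reused for the ResNet comparison by swapping the backbone (e.g. `timm.create_model("resnet50", ...)`) with either random or pre-trained initialization, which mirrors the comparison reported in Results.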