Alzheimer disease (AD) is a progressive condition characterized by cognitive decline and memory loss. Vision transformers (ViTs) are emerging as promising deep learning models in medical imaging, with potential applications in the detection and diagnosis of AD. This review systematically examines recent studies on the application of ViTs in detecting AD, evaluating diagnostic accuracy and the impact of network architecture on model performance. We conducted a systematic search across major medical databases, including China National Knowledge Infrastructure, CENTRAL (Cochrane Central Register of Controlled Trials), ScienceDirect, PubMed, Web of Science, and Scopus, covering publications from January 1, 2020, to March 1, 2024. A manual search was also performed to include relevant gray literature. Eligible studies applied ViT models to distinguish patients with AD from healthy controls using neuroimaging data, specifically magnetic resonance imaging and positron emission tomography. Pooled diagnostic accuracy estimates, including sensitivity, specificity, likelihood ratios, and diagnostic odds ratios, were derived using random-effects models. Subgroup analyses comparing the diagnostic performance of different ViT network architectures were performed. The meta-analysis encompassed 11 studies and yielded the following pooled diagnostic accuracy estimates: sensitivity 0.925 (95% CI 0.892-0.959; P<.01), specificity 0.957 (95% CI 0.932-0.981; P<.01), positive likelihood ratio 21.84 (95% CI 12.26-38.91; P<.01), and negative likelihood ratio 0.08 (95% CI 0.05-0.14; P<.01). The area under the curve was notably high at 0.924. The findings highlight the potential of ViTs as effective tools for early and accurate AD diagnosis, offering insights for future neuroimaging-based diagnostic approaches. This systematic review provides valuable evidence for the utility of ViT models in distinguishing patients with AD from healthy controls, thereby contributing to advancements in neuroimaging-based diagnostic methodologies. PROSPERO CRD42024584347; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=584347.
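As a rough illustration (not the authors' analysis code), the relationship between pooled sensitivity, pooled specificity, and the derived likelihood ratios and diagnostic odds ratio can be sketched in Python. The point estimates below are taken from the abstract; the simple ratio formulas only approximate what the review's random-effects models estimate with full uncertainty quantification.

```python
# Minimal sketch: deriving likelihood ratios and the diagnostic odds ratio
# from pooled sensitivity and specificity. Illustrative approximation only;
# the review pools these quantities with random-effects models rather than
# by plain point-estimate arithmetic.

pooled_sensitivity = 0.925  # pooled estimate reported in the abstract
pooled_specificity = 0.957  # pooled estimate reported in the abstract

# Positive likelihood ratio: how much a positive result raises the odds of AD.
lr_positive = pooled_sensitivity / (1.0 - pooled_specificity)

# Negative likelihood ratio: how much a negative result lowers the odds of AD.
lr_negative = (1.0 - pooled_sensitivity) / pooled_specificity

# Diagnostic odds ratio: a single summary of discriminative ability.
diagnostic_odds_ratio = lr_positive / lr_negative

print(f"LR+ ~ {lr_positive:.2f}")   # ~21.5, close to the pooled 21.84
print(f"LR- ~ {lr_negative:.3f}")   # ~0.078, close to the pooled 0.08
print(f"DOR ~ {diagnostic_odds_ratio:.1f}")
```

The close agreement between these back-of-the-envelope ratios and the pooled likelihood ratios reported above is expected, since the likelihood ratios are functions of sensitivity and specificity; the pooled CIs and P values, however, come from the random-effects meta-analysis itself.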