Abstract

Image quality assessment (IQA) of fundus images is a foundational step in automated disease analysis, supporting the automation of screening, diagnosis, follow-up, and related research for diabetic retinopathy (DR). This study introduces a deep learning-based approach to IQA of ultra-widefield optical coherence tomography angiography (UW-OCTA) images from patients with DR. Because ultra-widefield technology is new and not yet widespread, equipment and operator training are costly, and ethics and patient-privacy concerns constrain data sharing, UW-OCTA datasets are notably scarce. To address this, we first pre-train a vision transformer (ViT) model on a dataset of 6 mm × 6 mm OCTA images, enabling the model to acquire a fundamental understanding of OCTA image characteristics and quality indicators, and then fine-tune it on 12 mm × 12 mm UW-OCTA images to improve quality-assessment accuracy. This transfer-learning strategy leverages the generic features learned during pre-training and adapts the model to evaluate UW-OCTA image quality effectively. Experimental results show that the proposed method outperforms ResNet18, ResNet34, and ResNet50, achieving an AUC of 0.9026 and a Kappa value of 0.7310. Ablation studies, in which pre-training on 6 mm × 6 mm OCTA images was omitted or the backbone was replaced with the ViT-Base variant, produced declines in AUC and Kappa of varying degrees, confirming the efficacy of each module in our method.
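The two-stage strategy described above (pre-train a ViT on 6 mm × 6 mm OCTA images, then fine-tune the same weights on 12 mm × 12 mm UW-OCTA images) can be sketched in PyTorch as follows. This is a minimal illustration only: the `TinyViT` architecture, tensor shapes, learning rates, and random surrogate batches are all assumptions for demonstration and do not reflect the paper's actual model or data.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier: patch embedding + transformer encoder.
    Illustrative stand-in for the paper's backbone, not its actual network."""
    def __init__(self, img_size=64, patch=8, dim=64, depth=2, heads=4, n_classes=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Convolutional patch embedding: one token per non-overlapping patch.
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Binary quality label, e.g. gradable vs. ungradable.
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))  # mean-pool tokens, then classify

def train_step(model, x, y, opt):
    """One supervised step with cross-entropy loss; returns the scalar loss."""
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# Stage 1: pre-train on 6 mm x 6 mm OCTA batches (random surrogates here).
model = TinyViT()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x6, y6 = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
pre_loss = train_step(model, x6, y6, opt)

# Stage 2: fine-tune the SAME weights on 12 mm x 12 mm UW-OCTA batches
# (resized to the shared input resolution), with a lower learning rate.
ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x12, y12 = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
ft_loss = train_step(model, x12, y12, ft_opt)
print(f"pretrain loss {pre_loss:.3f}, fine-tune loss {ft_loss:.3f}")
```

The key design point is that stage 2 reuses the stage-1 parameters rather than re-initializing, so the generic OCTA features learned on the abundant 6 mm × 6 mm data transfer to the scarce UW-OCTA domain.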
