Objective
Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases such as glaucoma and macular degeneration. Clinicians assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging, making manual image evaluation variable and subjective, and potentially impacting clinical decision making. We introduce our deep learning pipeline, "EyeLiner," for registering, or aligning, two-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences due to camera orientation while preserving pathological changes.

Design
EyeLiner registers a "moving" image to a "fixed" image using a deep learning-based keypoint matching algorithm.

Participants
We evaluate EyeLiner on three longitudinal datasets: Fundus Image REgistration (FIRE), Sequential Fundus for Glaucoma Forecast (SIGF), and our internal glaucoma dataset from the Colorado Ophthalmology Research Information System (CORIS).

Methods
Anatomical keypoints along the retinal blood vessels are detected in the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters are learned from the corresponding keypoints.

Main Outcome Measures
We computed the mean distance (MD) between manually annotated keypoints from the fixed image and the registered moving image. For comparison with existing state-of-the-art retinal registration approaches, we used the mean area under the curve (mAUC) metric introduced in the FIRE dataset study.

Results
EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitatively, registration reduced the MD from 321.32 to 3.74 pixels on FIRE, from 9.86 to 2.03 pixels on CORIS, and from 25.23 to 5.94 pixels on SIGF. We also obtain an mAUC of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, outperforming the current state of the art, SuperRetina (mAUC = 0.76, 0.83, and 0.74, respectively).

Conclusions
Our pipeline demonstrates improved alignment of image pairs compared with current state-of-the-art methods on three separate datasets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.
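The abstract does not include code; as a rough illustration of the final Methods step, fitting transformation parameters to matched keypoints, the sketch below estimates a homography with OpenCV's RANSAC solver and warps the moving image into the fixed image's frame. The function `register_from_keypoints` and its inputs are hypothetical; EyeLiner's actual transformation model and implementation may differ.

```python
import cv2
import numpy as np

def register_from_keypoints(moving_img, fixed_shape, pts_moving, pts_fixed):
    """Fit a homography to matched keypoints and warp the moving image.

    pts_moving, pts_fixed: (N, 2) arrays of matched (x, y) keypoints.
    Hypothetical helper for illustration, not EyeLiner's actual API.
    """
    # Robustly estimate a 3x3 homography; RANSAC rejects spurious matches.
    H, inlier_mask = cv2.findHomography(
        pts_moving.astype(np.float32),
        pts_fixed.astype(np.float32),
        method=cv2.RANSAC,
        ransacReprojThreshold=3.0,
    )
    # Resample the moving image into the fixed image's coordinate frame.
    h, w = fixed_shape[:2]
    registered = cv2.warpPerspective(moving_img, H, (w, h))
    return registered, H, inlier_mask
```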
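The MD outcome measure described above is, in effect, the average Euclidean distance between annotated fixed-image keypoints and their moving-image counterparts mapped through the estimated transform. A minimal sketch, reusing the hypothetical homography `H` from the snippet above:

```python
import numpy as np

def mean_distance(pts_fixed, pts_moving, H):
    """Mean Euclidean distance in pixels between fixed-image keypoints and
    moving-image keypoints mapped through homography H (illustrative only)."""
    ones = np.ones((len(pts_moving), 1))
    mapped = (H @ np.hstack([pts_moving, ones]).T).T   # homogeneous coords
    mapped = mapped[:, :2] / mapped[:, 2:3]            # perspective divide
    return float(np.linalg.norm(mapped - pts_fixed, axis=1).mean())
```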
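The registration checkerboards mentioned in the Results interleave tiles from the fixed and registered images so that residual misalignment appears as breaks in vessel continuity at tile borders. One way such a composite could be built (the helper name and tile size are illustrative):

```python
import numpy as np

def checkerboard(fixed_img, registered_img, tile=64):
    """Interleave square tiles from two same-sized images; with good
    alignment, vessels continue smoothly across tile boundaries."""
    rows, cols = np.indices(fixed_img.shape[:2])
    mask = ((rows // tile) + (cols // tile)) % 2 == 1
    out = fixed_img.copy()
    out[mask] = registered_img[mask]
    return out
```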