Abstract

Deep neural networks (DNNs) have facilitated the development of computer-aided diagnosis (CAD) systems for fundus diseases, helping ophthalmologists reduce missed-diagnosis and misdiagnosis rates. However, most CAD systems are purely data-driven and lack the prior medical knowledge that could further improve performance. To address this, we propose a human-in-the-loop (HITL) CAD system that leverages ophthalmologists' eye-tracking information. Concretely, the HITL CAD system is built on multi-instance learning (MIL), in which clinicians' gaze maps help select diagnosis-related instances. A dual-cross-attention MIL (DCAMIL) network is further employed to curb the adverse effects of noisy instances. In addition, a sequence augmentation (SA) module and a domain adversarial network (DAN) are introduced to enrich and standardize, respectively, the instances in the training bag, thereby enhancing the robustness of our method. We conduct comparative experiments on our newly constructed datasets (AMD-Gaze and DR-Gaze) for age-related macular degeneration (AMD) and early diabetic retinopathy (DR) detection, respectively. Rigorous experiments demonstrate the feasibility of our HITL CAD system and the superiority of the proposed DCAMIL, which fully exploits ophthalmologists' eye-tracking information. These investigations indicate that clinicians' gaze maps, as prior medical knowledge, have the potential to contribute to CAD systems for clinical diseases.
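To make the gaze-guided MIL idea concrete, the following PyTorch sketch shows one plausible way to combine per-patch gaze scores with a cross-attention pooling head over an instance bag. It is a minimal illustration, not the authors' DCAMIL implementation: the class name GazeMILHead, the feature dimension, and the simple reweighting of instances by gaze density are all assumptions made for clarity.

# Minimal sketch (not the authors' code): a gaze-guided MIL bag pooled by
# cross-attention. Instance features would come from a patch encoder applied
# to a fundus image; gaze_scores would come from a clinician's fixation map.
import torch
import torch.nn as nn

class GazeMILHead(nn.Module):
    def __init__(self, feat_dim=512, num_classes=2, num_heads=4):
        super().__init__()
        # Learnable query token that attends over the bag of instance features.
        self.query = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, instances, gaze_scores):
        # instances: (B, N, feat_dim) patch-level features forming the bag
        # gaze_scores: (B, N) normalized fixation density per patch
        # Bias the bag toward gaze-attended instances by simple reweighting
        # (an assumed stand-in for gaze-guided instance selection).
        weighted = instances * gaze_scores.unsqueeze(-1)
        query = self.query.expand(instances.size(0), -1, -1)
        pooled, attn = self.cross_attn(query, weighted, weighted)
        return self.classifier(pooled.squeeze(1)), attn

# Usage: a bag of 64 patch features per image with per-patch gaze densities.
head = GazeMILHead()
feats = torch.randn(2, 64, 512)
gaze = torch.rand(2, 64).softmax(dim=-1)
logits, attn = head(feats, gaze)
print(logits.shape)  # torch.Size([2, 2])

The returned attention weights indicate which instances dominate the bag-level decision, which is the property that lets gaze maps steer the model toward diagnosis-related regions.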
