Abstract

Diabetic retinopathy (DR) grading from fundus images has attracted increasing interest in both academic and industrial communities. Most convolutional neural network-based algorithms treat DR grading as a classification task driven by image-level annotations alone. However, these algorithms do not fully exploit the valuable information carried by DR-related lesions. In this article, we present a robust framework for DR severity grading that collaboratively utilizes patch-level lesion annotations and image-level grade annotations. Through end-to-end optimization, the framework bidirectionally exchanges fine-grained lesion information and image-level grade information, and thereby learns more discriminative features for DR grading. The proposed framework outperforms recent state-of-the-art algorithms as well as three clinical ophthalmologists, each with over nine years of experience. By testing on datasets with different label and camera distributions, we show that our algorithm is robust to the image quality and distribution variations that commonly occur in real-world practice. We further examine the proposed framework through extensive ablation studies that demonstrate the effectiveness and necessity of each design choice. The code and some valuable annotations are now publicly available.
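The abstract does not specify architectural details. As a rough illustration of the kind of collaborative design it describes, the sketch below (not the authors' code) shows a shared CNN backbone with a patch-level lesion head supervised by lesion annotations and an image-level grading head supervised by DR grades, trained jointly end to end. The backbone choice, module names, loss weighting, and the number of grades and lesion types are all illustrative assumptions.

```python
# Hypothetical sketch of a collaborative DR grading framework.
# Assumptions: shared backbone, a patch-level lesion branch, an image-level
# grading branch, and a joint loss optimized end to end.

import torch
import torch.nn as nn
import torchvision.models as models


class CollaborativeDRModel(nn.Module):
    def __init__(self, num_grades: int = 5, num_lesion_types: int = 4):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Shared feature extractor (drop the avgpool and fc layers).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = 2048

        # Patch-level branch: per-location lesion scores on the feature map,
        # supervised by patch-level lesion annotations.
        self.lesion_head = nn.Conv2d(feat_dim, num_lesion_types, kernel_size=1)

        # Image-level branch: pooled image features concatenated with pooled
        # lesion evidence, mapped to a DR severity grade.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.grade_head = nn.Linear(feat_dim + num_lesion_types, num_grades)

    def forward(self, images: torch.Tensor):
        fmap = self.features(images)                      # (B, 2048, H', W')
        lesion_maps = self.lesion_head(fmap)              # (B, L, H', W')
        img_feat = self.pool(fmap).flatten(1)             # (B, 2048)
        lesion_feat = self.pool(lesion_maps).flatten(1)   # (B, L)
        grade_logits = self.grade_head(torch.cat([img_feat, lesion_feat], dim=1))
        return grade_logits, lesion_maps


def joint_loss(grade_logits, lesion_maps, grade_labels, lesion_masks, alpha=1.0):
    """End-to-end objective: image-level grading loss + patch-level lesion loss.

    `lesion_masks` is assumed to be a (B, L, H', W') tensor derived from the
    patch-level annotations; `alpha` is an illustrative balancing weight.
    """
    grade_loss = nn.functional.cross_entropy(grade_logits, grade_labels)
    lesion_loss = nn.functional.binary_cross_entropy_with_logits(lesion_maps, lesion_masks)
    return grade_loss + alpha * lesion_loss
```

Because both heads share the backbone and are optimized under a single objective, gradients from the grading loss and the lesion loss flow through the same features, which is one plausible way the lesion-level and image-level information could inform each other.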
