Abstract

Domain generalization is a challenging problem in deep learning, especially in medical image analysis, because of the large diversity between datasets. Existing work in the literature tends to optimize performance on a single target domain, without regard to model generalizability to other domains or distributions. Large discrepancies in dataset size and major domain shifts can therefore cause single-source-trained models to underperform at test time. In this paper, we address the problem of domain generalization in Diabetic Retinopathy (DR) classification. The baseline for comparison is joint training on the different datasets, followed by testing on each dataset individually. We introduce a method that encourages seeking a flatter minimum during training while imposing a regularization that reduces gradient variance across domains, and therefore yields satisfactory results on out-of-domain DR classification. We also show that adopting DR-appropriate augmentations enhances model performance and in-domain generalizability. Evaluating on four open-source DR datasets, we show that the proposed domain generalization method outperforms separate and joint training strategies as well as well-established domain generalization methods. Source code is available at https://github.com/BioMedIA-MBZUAI/DRGen.

Keywords: Deep learning, Diabetic retinopathy, Domain generalization, Regularization
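To make the two ingredients described above concrete, the sketch below combines a penalty on the variance of per-domain gradients with a running weight average of the model as a stand-in for flat-minima seeking. The function names, the EMA-style averaging, and the `lam`/`avg_decay` hyperparameters are illustrative assumptions and not the paper's exact implementation.

```python
import torch


def domain_gradient_variance(model, losses):
    """Variance, across domains, of the flattened parameter gradients.

    `losses` is a list of scalar losses, one per source domain. Penalising
    this variance discourages updates that conflict between domains
    (illustrative assumption, not the paper's exact regularizer).
    """
    params = [p for p in model.parameters() if p.requires_grad]
    per_domain = []
    for loss in losses:
        grads = torch.autograd.grad(loss, params,
                                    retain_graph=True, create_graph=True)
        per_domain.append(torch.cat([g.reshape(-1) for g in grads]))
    stacked = torch.stack(per_domain)                 # (n_domains, n_params)
    return stacked.var(dim=0, unbiased=False).mean()  # scalar penalty


def train_step(model, avg_model, optimizer, domain_batches, criterion,
               lam=0.1, avg_decay=0.99):
    """One update: mean classification loss + gradient-variance penalty,
    followed by an exponential moving average of the weights as a simple
    proxy for flat-minima-seeking weight averaging."""
    losses = [criterion(model(x), y) for x, y in domain_batches]
    erm_loss = torch.stack(losses).mean()
    penalty = domain_gradient_variance(model, losses)
    total = erm_loss + lam * penalty

    optimizer.zero_grad()
    total.backward()
    optimizer.step()

    # Evaluating the averaged weights tends to land in a flatter, better
    # generalizing region of the loss surface than the raw SGD iterate.
    with torch.no_grad():
        for p_avg, p in zip(avg_model.parameters(), model.parameters()):
            p_avg.mul_(avg_decay).add_(p, alpha=1.0 - avg_decay)
    return total.item()
```

Here `domain_batches` would hold one `(images, labels)` mini-batch per source DR dataset, and `avg_model` is a separate copy of the network whose averaged weights are used at evaluation time.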

