Abstract

Deep learning has achieved great success in image classification tasks when sufficient labeled training images are available. In fundus-image-based glaucoma diagnosis, however, training data are often very limited due to the high cost of data labeling. Moreover, when facing a new application environment, it is difficult to train a network with only a few labeled training images. In this case, images from an auxiliary domain (i.e., the source domain) can be exploited to improve performance. Unfortunately, directly using the source-domain data may not yield promising performance on the domain of interest (i.e., the target domain), owing to the distribution discrepancy between the two domains. In this paper, focusing on glaucoma diagnosis, we propose a deep adversarial transfer learning method conditioned on label information to match the distributions of the source and target domains, so that labeled source images can be leveraged to improve classification performance in the target domain. Unlike most existing adversarial transfer learning methods, which consider only marginal distribution matching, we seek to match the label-conditional distributions by handling images with different labels separately. We conduct experiments on three glaucoma datasets and adopt multiple evaluation metrics to verify the effectiveness of the proposed method.
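The abstract does not specify the architecture, but one common way to realize label-conditional adversarial matching is to attach one source-vs-target domain discriminator per class and train the feature extractor adversarially via gradient reversal, so that source and target features are aligned separately within each class rather than only marginally. The sketch below illustrates this idea in PyTorch; the backbone, layer sizes, and the routing of each sample by its (pseudo-)label are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

# Gradient reversal: identity in the forward pass, negated gradient in the
# backward pass -- the standard trick for adversarial domain adaptation.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class ConditionalAdversarialNet(nn.Module):
    """Feature extractor + glaucoma classifier + one domain discriminator
    per class label, so that source/target feature distributions are
    matched conditionally on the label (assumed design, for illustration)."""
    def __init__(self, feat_dim=256, num_classes=2):
        super().__init__()
        # Stand-in backbone; a real system would use a CNN over fundus images.
        self.features = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)
        # One binary (source vs. target) discriminator per class.
        self.discriminators = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                          nn.Linear(64, 1))
            for _ in range(num_classes))

    def forward(self, x, labels, lambd=1.0):
        f = self.features(x)
        class_logits = self.classifier(f)
        rev = grad_reverse(f, lambd)
        # Route each sample to the discriminator of its label (for unlabeled
        # target images, a pseudo-label from the classifier could be used).
        dom_logits = torch.stack(
            [self.discriminators[int(y)](rev[i])
             for i, y in enumerate(labels)])
        return class_logits, dom_logits
```

Training would combine a cross-entropy classification loss on labeled images with a binary source/target loss on `dom_logits`; because of the reversed gradient, the discriminators learn to tell the domains apart within each class while the feature extractor learns class-conditionally domain-invariant features.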
