Abstract

Many medical image datasets have been collected without the annotations required for deep learning. In this paper, we propose a novel unsupervised domain adaptation framework with adversarial learning that minimizes annotation effort. Our framework employs a task-specific network, i.e., a fully convolutional network (FCN), for spatial density prediction. Moreover, we employ a domain discriminator, trained adversarially, to align features from the sparsely annotated target domain with those from the well-annotated source domain in feature space. We further propose a novel training strategy for the adversarial learning that couples data from the source and target domains and alternates the subnet updates. We use the public CBIS-DDSM dataset as the source domain and conduct two sets of experiments on two target domains, the public INbreast dataset and a self-collected dataset. Experimental results show consistent performance improvements over state-of-the-art methods, and the proposed training strategy is also shown to converge considerably faster.
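To make the alternating-update scheme concrete, the following is a minimal PyTorch sketch of the general adversarial feature-alignment idea the abstract describes; it is not the paper's actual architecture or hyperparameters. The toy FeatureExtractor, DensityHead, and DomainDiscriminator modules, the MSE density loss, and the weighting factor lambda_adv are all illustrative assumptions.

import torch
import torch.nn as nn

# Illustrative stand-ins for the paper's subnets. The real task network is
# an FCN predicting a spatial density map; a tiny conv stack keeps this runnable.
class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DensityHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel density prediction
    def forward(self, f):
        return self.head(f)

class DomainDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # logit: source (1) vs. target (0)
        )
    def forward(self, f):
        return self.net(f)

extractor, head, disc = FeatureExtractor(), DensityHead(), DomainDiscriminator()
opt_task = torch.optim.Adam(list(extractor.parameters()) + list(head.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.1  # adversarial loss weight; value is illustrative

def train_step(x_src, y_src, x_tgt):
    """One coupled step: a source batch and a target batch are fed together,
    and the subnet updates alternate (discriminator first, then task network)."""
    f_src, f_tgt = extractor(x_src), extractor(x_tgt)

    # 1) Update the discriminator to distinguish source from target features.
    opt_disc.zero_grad()
    d_loss = bce(disc(f_src.detach()), torch.ones(x_src.size(0), 1)) + \
             bce(disc(f_tgt.detach()), torch.zeros(x_tgt.size(0), 1))
    d_loss.backward()
    opt_disc.step()

    # 2) Update the task network: supervised density loss on the source batch
    # plus an adversarial term pushing target features toward "source".
    opt_task.zero_grad()
    task_loss = nn.functional.mse_loss(head(f_src), y_src)
    adv_loss = bce(disc(f_tgt), torch.ones(x_tgt.size(0), 1))
    (task_loss + lambda_adv * adv_loss).backward()
    opt_task.step()

# Usage with dummy tensors standing in for mammogram patches and density maps:
x_src, y_src = torch.randn(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
x_tgt = torch.randn(4, 1, 64, 64)
train_step(x_src, y_src, x_tgt)

Coupling the two domains in a single step, rather than training the discriminator to convergence first, is one plausible reading of the alternating strategy the abstract credits with faster convergence.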
