Abstract
This paper proposes a novel approach to semi-supervised domain adaptation for holistic regression tasks, where a DNN predicts a continuous value given an input image x. The current literature largely lacks domain adaptation approaches tailored to this task, as most existing methods focus on classification. In the context of holistic regression, most real-world datasets exhibit not only a covariate (or domain) shift, but also a label gap: the target dataset may contain labels not included in the source dataset (and vice versa). We propose an approach that tackles both covariate shift and label gap in a unified training framework. Specifically, a Generative Adversarial Network (GAN) is used to reduce the covariate shift, and the label gap is mitigated via label normalisation. To avoid overfitting, we propose a stopping criterion that simultaneously exploits the Maximum Mean Discrepancy and the GAN Global Optimality condition. To restore the original (previously normalised) label range, a handful of annotated images from the target domain is used. Our experimental results, obtained on 3 different datasets, demonstrate that our approach drastically outperforms the state of the art across the board. Specifically, for the cell counting problem, the mean squared error (MSE) is reduced from 759 to 5.62; in the case of the pedestrian dataset, our approach lowers the MSE from 131 to 1.47. For the last experimental setup, we borrowed a task from plant biology, i.e., counting the number of leaves in a plant, and ran two series of experiments, showing that the MSE is reduced from 2.36 to 0.88 (intra-species) and from 1.48 to 0.6 (inter-species).
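To make the label-handling step concrete, the sketch below min-max normalises source labels for training and then restores the target label range from a handful of annotated target images. The min-max scheme, helper names, and numbers are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def normalise_labels(y, y_min, y_max):
    # Map raw counts to [0, 1] so source and target share a common label range.
    return (y - y_min) / (y_max - y_min)

def denormalise_predictions(y_norm, y_min, y_max):
    # Map normalised predictions back to an original label range.
    return y_norm * (y_max - y_min) + y_min

# Source labels are fully annotated, so their range is known.
y_source = np.array([12.0, 45.0, 7.0, 60.0])
y_source_norm = normalise_labels(y_source, y_source.min(), y_source.max())

# A handful of annotated target images (assumed here) is used to estimate the
# target range and restore the scale of the normalised network outputs.
y_target_few = np.array([80.0, 150.0, 95.0])
t_min, t_max = y_target_few.min(), y_target_few.max()

pred_norm = np.array([0.1, 0.5, 0.9])   # normalised DNN outputs on target images
pred_counts = denormalise_predictions(pred_norm, t_min, t_max)
print(pred_counts)
```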
Highlights
According to [1], domain adaptation methods can be classified based on the relation between the label sets of the source and target domains
We compare our approach with DANN [5], as it is another approach in the literature that can be applied to holistic counting
Together with the Domain Adaptation (DA) results, we report the upper bound (UB) and the lower bound (LB) results: in this context, UB is obtained by feeding the pretrained model
Summary
According to [1], domain adaptation methods can be classified based on the relation between the label sets of the source and target domains. Let YS and YT be the label sets of the source and target domains; domain adaptation algorithms can then be classified as: closed set (YS = YT), open set (YS ∩ YT ≠ ∅), partial (YT ⊂ YS), and universal (no prior knowledge of the label sets is available). When a model f(xS) is trained on a (source) dataset XS to perform a task T, we want the same model to generalise to a different (target) dataset XT. Domain adaptation is challenged by covariate (or domain) shift: the marginal distributions of the source DS and target DT datasets are different, i.e., DS ≠ DT [2]. In this paper we investigate DA for holistic regression.
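To make the notion of covariate shift concrete, the minimal sketch below estimates the Maximum Mean Discrepancy (the quantity also used in our stopping criterion) between a batch of source features and a batch of target features. The RBF kernel, its bandwidth, and the random data are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    # Pairwise squared Euclidean distances between rows of a and b.
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(xs, xt, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between two
    # samples, e.g. feature batches drawn from the source and target domains.
    k_ss = rbf_kernel(xs, xs, sigma)
    k_tt = rbf_kernel(xt, xt, sigma)
    k_st = rbf_kernel(xs, xt, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

# Example: two batches of 128-d features; a small MMD suggests the
# covariate shift between DS and DT has been reduced.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 128))
tgt = rng.normal(0.5, 1.0, size=(64, 128))
print(mmd2(src, tgt))
```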