Abstract
Domain adaptation (DA) has been widely investigated as a framework to alleviate the laborious task of data annotation for image segmentation. Most DA investigations operate under the unsupervised domain adaptation (UDA) setting, where the modeler has access to a large cohort of labeled source domain data and target domain data with no annotations. UDA techniques exhibit poor performance when the domain gap, i.e., the mismatch between the source and target data distributions, is large. We hypothesize that this performance gap can be reduced if a small subset of labeled target domain data is available. In this paper, we systematically investigate the impact of varying amounts of labeled target domain data on DA performance. We specifically focus on the problem of segmenting eye regions from eye images collected using two different head-mounted display systems. The source domain comprises 12,759 eye images with annotations, and the target domain comprises 4,629 images with varying amounts of annotations. Experiments are performed to compare DA performance under three schemes: unsupervised (UDA), supervised (SDA), and semi-supervised (SSDA) domain adaptation. We evaluate these schemes using the mean intersection-over-union (mIoU) metric. Using only 200 labeled target samples under the SDA and SSDA schemes, we show improvements in mIoU of 5.4% and 6.6%, respectively, over the UDA baseline mIoU of 81.7%. By using all available labeled target data, models trained under SSDA achieve a competitive mIoU score of 89.8%. Overall, we conclude that the availability of a small subset of annotated target domain data can substantially improve DA performance.
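For reference, the mIoU metric used in the evaluation above can be sketched as follows. This is a minimal NumPy illustration of the standard definition (per-class intersection over union, averaged across classes); the function name and the toy label maps are illustrative, not taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps.

    Classes absent from both prediction and ground truth are skipped
    so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes (e.g., eye region vs. background)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, target, num_classes=2)  # ~0.583
```

In practice, segmentation papers typically accumulate per-class intersection and union counts over the whole test set before taking the ratio, rather than averaging per-image scores; the per-class logic is the same as above.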