Abstract
Unsupervised domain adaptation aims to transfer knowledge from existing well-labeled tasks to new ones where labels are unavailable. In real-world applications, domain discrepancy is usually uncontrollable, especially for multi-modality data, which strongly motivates the study of multi-modality domain adaptation. Because labels are unavailable in the target domain, learning semantic multi-modality representations and successfully adapting a classifier from the source to the target domain remain open challenges. To address these issues, we propose a multi-modality adversarial network (MMAN), which applies stacked attention to learn semantic multi-modality representations and reduces domain discrepancy via adversarial training. Unlike previous domain adaptation methods, which cannot make full use of source-domain category information, MMAN employs a multi-channel constraint to capture fine-grained category knowledge, which enhances the discrimination of target samples and boosts target performance on both single-modality and multi-modality domain adaptation problems. We apply the proposed MMAN to two applications: cross-domain object recognition and cross-domain social event recognition. Extensive experimental evaluations demonstrate the effectiveness of the proposed model for unsupervised domain adaptation.
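The adversarial training mentioned above can be illustrated with a minimal sketch of domain-adversarial feature alignment: a domain discriminator learns to tell source features from target features, while the shared feature extractor receives the reversed discriminator gradient so its features become domain-invariant. This is a generic NumPy toy with linear models and hand-derived gradients, not MMAN itself (the stacked attention and multi-channel constraint are omitted); all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: source and target domains with a mean shift (the "discrepancy").
Xs = rng.normal(loc=0.0, size=(100, 4))
Xt = rng.normal(loc=2.0, size=(100, 4))

W = rng.normal(scale=0.1, size=(4, 4))   # shared (linear) feature extractor
w_d = rng.normal(scale=0.1, size=4)      # domain discriminator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lam, lr = 0.1, 0.01                      # reversal strength, learning rate
X = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])  # 0=source, 1=target

for step in range(100):
    F = X @ W                            # shared features for both domains
    p = sigmoid(F @ w_d)                 # discriminator's domain prediction
    g = (p - d)[:, None] / len(X)        # BCE gradient w.r.t. discriminator logits
    # Discriminator DESCENDS its loss (gets better at telling domains apart).
    w_d -= lr * (F.T @ (p - d) / len(X))
    # Feature extractor receives the REVERSED gradient (scaled by -lam),
    # i.e. it ASCENDS the discriminator loss, pushing features to be
    # indistinguishable across domains.
    grad_W = X.T @ (g * w_d[None, :])
    W -= lr * (-lam) * grad_W

features = X @ W                         # domain-aligned features after training
```

A task classifier trained on labeled source features would then be applied directly to the aligned target features; practical systems replace the manual gradient reversal with a gradient-reversal layer inside an autodiff framework.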