Abstract
Learning a visual category from few labeled samples is a challenging problem in machine learning, which has motivated multi-source adaptation learning, a technique that transfers multiple prior discriminative models to the target domain. Under this paradigm, however, the different visual features at hand cannot be effectively exploited to represent a target object with the versatility needed to boost adaptation performance. Moreover, existing multi-source adaptation schemes mostly treat visual understanding and feature learning independently, which may lead to the so-called semantic gap between low-level features and high-level semantics. Finally, how to discriminatively select the prior models remains an unresolved issue. To address these issues, we propose a novel co-regression framework with Multi-Source adaptation Multi-Feature Representation (MSMFR) for visual recognition, which jointly explores robust multi-feature co-regression, latent space learning, and representative source selection by integrating them into a unified framework for joint visual understanding and feature learning. Specifically, MSMFR conducts multi-feature co-regression by simultaneously uncovering multiple latent spaces and minimizing the co-regression residual, taking the correlations among multiple feature representations into account. Furthermore, MSMFR automatically selects the representative (or discriminative) source models for each target feature representation by formulating a row-sparsity pursuit problem. The validity of our method is examined on three challenging visual domain adaptation tasks over several benchmark datasets, which demonstrate its superiority over several state-of-the-art methods.
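The abstract does not specify how the row-sparsity pursuit is solved. As a minimal, hypothetical sketch only (the function names `select_sources` and `prox_l21`, the variable shapes, and the proximal-gradient solver are our own assumptions, not the paper's formulation), an L2,1-regularized least-squares problem drives entire rows of a weight matrix to zero, which has the effect of deselecting whole source models:

```python
import numpy as np

def prox_l21(W, t):
    # Row-wise soft-thresholding: the proximal operator of t * ||W||_{2,1}.
    # Rows whose L2 norm falls below t are set exactly to zero.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def select_sources(X, Y, lam=1.0, iters=1000):
    """Illustrative solver for  min_W 0.5*||Y - XW||_F^2 + lam*||W||_{2,1}
    via proximal gradient descent.
    X: (n, m) scores of m candidate source models on n target samples.
    Y: (n, c) target outputs.
    Zero rows of the returned W mark deselected source models."""
    n, m = X.shape
    W = np.zeros((m, Y.shape[1]))
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth term
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)          # gradient of the least-squares loss
        W = prox_l21(W - grad / L, lam / L)  # gradient step, then row shrinkage
    return W
```

On synthetic data where only the first two of five candidate sources determine the target, the remaining three rows of `W` are shrunk to zero, illustrating how row sparsity performs source selection.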