Abstract

Zero-shot learning (ZSL) aims to recognize novel classes that have no labeled samples during the training phase, which gives rise to the domain shift problem. In practice, a large number of compounded unlabeled samples are available. It is therefore crucial to accurately estimate the data distribution of these compounded unlabeled samples and thereby improve ZSL performance. This paper proposes a zero-shot learning boosting framework. Specifically, ZSL is transformed into a co-training problem between data distribution estimation of the unlabeled samples and ZSL itself, where the distribution estimation is modeled as concept-constrained clustering. Furthermore, we design an alternating optimization strategy to realize mutual guidance between the two processes. Finally, systematic experiments verify the effectiveness of the proposed concept-constrained clustering for alleviating the domain shift problem in ZSL and the universality of the proposed framework for boosting different base ZSL models.
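To make the alternating scheme concrete, the following Python sketch illustrates one possible reading of the co-training loop: a base ZSL model pseudo-labels the unlabeled samples, the clustering step is constrained by the attribute-induced class prototypes (the "concepts"), and the cluster centers are fed back to refine the ZSL mapping. All names, the toy data, and the simple nearest-prototype base model are illustrative assumptions, not the paper's actual algorithm.

    # Toy sketch of alternating optimization between a base ZSL model and
    # concept-constrained clustering of unlabeled samples (assumed setup).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 5 unseen classes described by 10-dim attribute vectors (the "concepts"),
    # with unlabeled visual features drawn around per-class centers.
    n_classes, n_attr, n_feat, n_per_class = 5, 10, 20, 40
    attributes = rng.normal(size=(n_classes, n_attr))      # class-level concepts
    true_centers = rng.normal(size=(n_classes, n_feat))
    labels = np.repeat(np.arange(n_classes), n_per_class)  # held out, only for evaluation
    features = true_centers[labels] + 0.5 * rng.normal(size=(labels.size, n_feat))

    # Assumed base ZSL model: a linear map W from attribute space to feature space;
    # each class prototype is attributes @ W and samples take the nearest prototype.
    W = 0.1 * rng.normal(size=(n_attr, n_feat))

    def predict(feats, W):
        prototypes = attributes @ W
        d = np.linalg.norm(feats[:, None, :] - prototypes[None], axis=-1)
        return d.argmin(axis=1)

    for it in range(10):
        # Step 1: concept-constrained clustering -- cluster assignments are anchored
        # to the attribute-induced prototypes rather than free k-means centers.
        pseudo = predict(features, W)
        centers = np.stack([
            features[pseudo == c].mean(axis=0) if np.any(pseudo == c) else (attributes @ W)[c]
            for c in range(n_classes)
        ])
        # Step 2: refit the ZSL mapping so prototypes move toward the empirical
        # cluster centers (mutual guidance between the two processes).
        W = np.linalg.lstsq(attributes, centers, rcond=None)[0]
        print(f"iter {it}: pseudo-label accuracy = {(predict(features, W) == labels).mean():.3f}")

In this sketch the two steps exchange information once per iteration; the paper's framework is described only at the level of the abstract above, so the particular clustering constraint and ZSL model here are placeholders.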
