Abstract
The core problem of zero-shot learning (ZSL) is extracting concise features from a cumbersome and intricate semantic space. Whereas existing ZSL methods mainly use semantic vectors to constrain the distances between classes in the embedding space, we are the first to emphasize the importance of the relationships between different dimensions of the semantic space. In this paper, we propose a novel inductive approach, the Multiple Semantic Subspaces Network (MSSN), to simplify complex and intractable semantic features. Our method generates multiple disentangled subfeatures via direct sum decomposition, which not only retain the complete semantic information but also yield simple, independent, hierarchical features. In particular, mapping into the embedding space through projection subspaces rather than the original space reduces the difficulty of model optimization and enhances the generalization ability of the model. We conduct extensive experiments on widely used benchmark datasets and compare against many algorithms from the recent literature. Against nineteen competitors, our model outperforms the state of the art in the conventional ZSL setting and achieves highly competitive performance in the generalized ZSL setting.
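To make the direct-sum idea concrete, the sketch below splits a class attribute vector into disjoint chunks (subspaces) and learns a separate projection from each chunk into the visual embedding space. This is a minimal illustration under our own assumptions, not the authors' MSSN implementation: the equal-size chunking, linear projectors, summed prototypes, and dot-product compatibility score are all simplifications chosen for brevity.

```python
import torch
import torch.nn as nn

class MultiSubspaceProjection(nn.Module):
    """Illustrative sketch: decompose the semantic space into K disjoint
    subspaces (a direct sum) and project each subspace separately into
    the visual embedding space."""

    def __init__(self, semantic_dim: int, embed_dim: int, num_subspaces: int):
        super().__init__()
        assert semantic_dim % num_subspaces == 0, "subspaces must partition the semantic space"
        self.chunk = semantic_dim // num_subspaces
        # One projector per subspace; their outputs are combined in embedding space.
        self.projectors = nn.ModuleList(
            nn.Linear(self.chunk, embed_dim) for _ in range(num_subspaces)
        )

    def forward(self, attributes: torch.Tensor) -> torch.Tensor:
        # attributes: (num_classes, semantic_dim)
        parts = attributes.split(self.chunk, dim=-1)        # direct-sum components
        projected = [proj(p) for proj, p in zip(self.projectors, parts)]
        return torch.stack(projected, dim=0).sum(dim=0)     # class prototypes in embedding space

# Zero-shot prediction: score image features against projected class prototypes.
model = MultiSubspaceProjection(semantic_dim=312, embed_dim=2048, num_subspaces=4)
class_attrs = torch.rand(50, 312)      # hypothetical per-class attribute vectors
image_feats = torch.rand(8, 2048)      # hypothetical CNN features of test images
prototypes = model(class_attrs)        # (50, 2048)
scores = image_feats @ prototypes.t()  # compatibility scores
predictions = scores.argmax(dim=1)     # predicted unseen-class indices
```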