Abstract

The core problem of zero-shot learning (ZSL) is extracting concise features from a cumbersome and intricate semantic space. While existing ZSL methods mainly use semantic vectors to constrain the distances between classes in the embedding space, we are the first to emphasize the importance of the relationships between different dimensions of the semantic space. In this paper, we propose a novel inductive approach, the Multiple Semantic Subspaces Network (MSSN), to simplify complex and intractable semantic features. Our method generates multiple disentangled sub-features via direct sum decomposition, which not only retains the complete semantic information but also yields simple, independent hierarchical features. In particular, mapping to the embedding space through projection subspaces rather than the original space reduces the difficulty of model optimization and enhances the generalization ability of the model. We conduct extensive experiments on standard benchmark datasets and compare against many algorithms from the latest literature. Against nineteen competitors, the results show that our model outperforms the state of the art in the conventional ZSL setting and is highly competitive in the generalized ZSL setting.
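To make the direct-sum idea concrete, the following is a minimal sketch (not the paper's actual MSSN implementation) of decomposing a semantic space into disjoint subspaces whose direct sum recovers the original space; the partition-by-dimension-blocks scheme and the function names are illustrative assumptions.

```python
import numpy as np

def direct_sum_decompose(S, k):
    """Split a semantic matrix S (n_classes x d) into k sub-feature blocks.

    The d dimensions are partitioned into k disjoint groups, so the
    subspaces are independent and their direct sum spans the original
    semantic space. (Illustrative scheme, not the paper's learned one.)
    """
    d = S.shape[1]
    splits = np.array_split(np.arange(d), k)
    return [S[:, idx] for idx in splits]

def recompose(subfeatures):
    """Direct sum: concatenating the blocks recovers the full space."""
    return np.concatenate(subfeatures, axis=1)

# Example: 5 classes with 12-dimensional semantic vectors, 3 subspaces.
S = np.random.default_rng(0).normal(size=(5, 12))
subs = direct_sum_decompose(S, 3)
S_rec = recompose(subs)
assert np.allclose(S, S_rec)  # the decomposition loses no semantic information
```

Because the subspaces are disjoint, each sub-feature can be mapped to the embedding space independently while the concatenation still retains the complete semantic information, which is the property the abstract highlights.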
